JavaScript

JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility and is often the default choice for front-end work unless a specific use case calls for something else. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for front-end engineers.

Latest Premium Content
Trend Report
Low-Code Development
Refcard #363
JavaScript Test Automation Frameworks
Refcard #288
Getting Started With Low-Code Development

DZone's Featured JavaScript Resources

How We Reduced LCP by 75% in a Production React App
By Satyam Nikhra
We recently launched a brand-new customer-facing React application and soon started receiving customer complaints. Pages were loading slowly and users were frustrated. Customers were churning. As we dug into our internal metrics, it became clear that things were even worse than we realized: our app ranked in the bottom five of the 27 apps in our organization. Our performance metrics reflected the same story. Our LCP at the 75th percentile was 7.7 seconds. Most users were staring at a loading screen for several seconds before they could interact with a page.

What Is LCP (Largest Contentful Paint)?

Largest Contentful Paint (LCP) is a Core Web Vitals metric that measures how long it takes for the main content of a page to become visible to the user. In other words, it approximates the moment at which users assume the page has fully loaded. For most pages, the LCP element is typically one of the following:

- A large image or hero banner
- A video poster image
- A large block of text
- A prominent product image

LCP is especially important because it focuses on perceived load time, not just when the page technically finishes loading. According to Core Web Vitals guidance:

- Good: ≤ 2.5 seconds
- Needs Improvement: 2.5–4.0 seconds
- Poor: > 4.0 seconds

How to Measure LCP Using Chrome Lighthouse

1. Launch the page in Google Chrome
2. Open DevTools (Cmd + Option + I on macOS or Ctrl + Shift + I on Windows)
3. Navigate to the Lighthouse tab
4. Select Performance and run the audit

Once the report is generated, Lighthouse highlights the Largest Contentful Paint metric along with the individual element that triggered it. This makes it easy to detect whether LCP was caused by a large image, a text block, or delayed rendering due to JavaScript or network requests. We used Lighthouse as our main tool to find bottlenecks and test fixes locally; the final assessment, however, came from 75th-percentile LCP data collected from real users.
LCP of Amazon

The Reason We Didn't Detect the LCP Problem in Non-Production Environments

The question raised most frequently during the inquiry was why the performance issue wasn't apparent before the application reached production. The main reason is that our non-production environments did not replicate real-world conditions. In staging, we tested with a fixed, limited dataset that was already cached and contained newer data. In addition, all third-party integrations pointed to sandbox environments that always returned cached responses, so network latency and cold-start behavior were largely invisible. Our 75th-percentile LCP in staging was approximately ~3.2 seconds, which felt acceptable for a first release, and no one considered it critical. In production, however, the situation was drastically different: larger datasets, uncached requests, and slower third-party responses all landed directly on the critical rendering path.

What We Tried First and Why It Didn't Help

1. Memoizing React Components

Our first reaction was to optimize at the level of React components. We introduced React.memo, useMemo, and useCallback in multiple components that were re-rendering frequently.

Example using React.memo. This prevents re-renders when props do not change:

```tsx
type VehicleCardProps = {
  vehicle: Vehicle;
  onSelect: (id: string) => void;
};

const VehicleCard = ({ vehicle, onSelect }: VehicleCardProps) => {
  return (
    <div>
      <img src={vehicle.imageUrl} alt={vehicle.name} />
      <h3>{vehicle.name}</h3>
      <button onClick={() => onSelect(vehicle.id)}>
        Select
      </button>
    </div>
  );
};

export default React.memo(VehicleCard);
```

Example using useMemo. This avoids recomputing expensive calculations on every render.
```tsx
const formattedPrice = useMemo(() => {
  return formatCurrency(vehicle.price);
}, [vehicle.price]);
```

Example using useCallback. This ensures the function is only re-created when its dependencies change:

```tsx
const handleSelect = useCallback(
  (id: string) => {
    setSelectedVehicleId(id);
  },
  []
);
```

Why This Didn't Improve LCP Much

- LCP was mainly affected by network latency, bundle size, and image loading, not React re-renders.
- Memoization saves CPU work after load, but it does not speed up the initial render.

Takeaway: component memoization is worthwhile, but it won't fix LCP problems caused by oversized bundles or slow network requests.

2. Lazy Loading UI Components

Next, we tried lazy-loading parts of the UI using React.lazy and Suspense:

```tsx
const HeavyComponent = React.lazy(() => import('./HeavyComponent'));
```

Why this didn't help much:

- The main content could only render once all of the critical UI components were available.
- We could not present any meaningful content until every component had loaded.

Takeaway: lazy loading helps only when non-critical UI can be deferred. If everything is needed up front, it will not reduce LCP.

What Actually Worked

1. Shrinking Bundle Size With Tree Shaking

A bundle analysis produced surprising results:

```javascript
// webpack.config.js
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

plugins: [
  new BundleAnalyzerPlugin()
]
```

A few libraries were taking up a large part of the bundle even though we used only a handful of their functions. The most significant contributor was lodash.

What we did to fix it: we replaced full imports with scoped imports.

```javascript
// replaced this
import _ from 'lodash';

// with this
import debounce from 'lodash/debounce';
```

In a few cases, we swapped dependencies for lighter alternatives, and we adjusted the Webpack configuration to ensure tree shaking worked correctly.

Result: LCP improved by around ~1.2 seconds.
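As a side note on the lodash example above: debounce, the one helper we actually needed, is small enough to sketch in a few lines, which underscores how much of the full-library import was dead weight. This is an illustrative stand-in, not lodash's actual implementation:

```javascript
// A minimal debounce sketch, similar in spirit to lodash/debounce.
// Each call resets the timer; only the last call in a burst runs.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

let calls = 0;
const onResize = debounce(() => { calls += 1; }, 50);
onResize();
onResize();
onResize(); // only this last call fires, ~50 ms later
```

Whether you hand-roll a helper like this or use a scoped import, the point is the same: the bundle only carries the code the page actually executes.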
Takeaway: for LCP, bundle size matters more than component-level optimizations.

2. Image Optimization and Smarter Loading

Our application sells cars online, which means we display a lot of vehicle images, all served by a third-party service.

What we discovered:

- Images were at a much higher resolution than needed.
- File sizes were unnecessarily large, and everything was in .png format.
- All images were loading eagerly.

What we did to fix it:

1. Converted images to WebP format using the sharp npm module:

```javascript
import sharp from 'sharp';

sharp(inputBuffer)
  .resize(800)
  .toFormat('webp')
  .toBuffer();
```

2. Served responsive image sizes based on the rendered screen size:

```html
<img
  src="car-800.webp"
  srcset="car-400.webp 400w, car-800.webp 800w"
  sizes="(max-width: 600px) 400px, 800px"
  loading="lazy"
/>
```

3. Lazy-loaded images in carousels: load only the first few visible images, then load the next set as the user continues scrolling or sliding.

Result: LCP improved by around ~1 second.

Takeaway: image optimization is one of the highest-ROI LCP improvements.

3. Getting Rid of Sequential API Calls

We traced the API calls made during initial page load and found a chain of sequential requests:

API A → API B → API C → API D

Each request depended on the preceding response, which resulted in:

- Multiple network round trips
- Repeated authentication checks
- Multiple database reads

Because of these dependencies, parallelizing the calls was impossible.

What we did to fix it: we consolidated the sequential APIs' logic into a single backend workflow API.

```javascript
// Instead of multiple calls from the frontend
GET /api/workflow/initial-data
```

This API:

- Coordinated the service calls behind the scenes
- Combined the business logic
- Returned a single aggregated response to the frontend

Result: LCP improved by around ~1.4 seconds.

Additional advantages:

- Fewer database reads
- Lighter load on the auth server
- Simpler frontend logic
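The consolidated workflow API described above can be sketched as a single backend handler. The three service functions here are hypothetical stand-ins for the internal calls; the point is that the dependent chain runs server-side, so the browser pays for one round trip instead of four:

```javascript
// Hypothetical stand-ins for the internal services behind APIs A, B, and C.
async function fetchUser() { return { id: 'u1' }; }
async function fetchAccount(user) { return { userId: user.id, plan: 'pro' }; }
async function fetchVehicles(account) { return [{ id: 'v1', plan: account.plan }]; }

// One handler coordinates the dependent calls and returns a single payload.
async function getInitialData() {
  const user = await fetchUser();                // step A
  const account = await fetchAccount(user);      // step B depends on A
  const vehicles = await fetchVehicles(account); // step C depends on B
  return { user, account, vehicles };            // aggregated response to the frontend
}
```

The chain is still sequential, but it now runs inside the data center on low-latency links, and the frontend sees exactly one request.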
4. Caching the Responses of Slow Third-Party APIs

A third-party API frequently used for pricing was consistently slow, generally taking 2–3 seconds per request.

What we did to fix it: we cached the responses server-side in Redis.

```javascript
// Pseudo-code
if (cache.exists(key)) {
  return cache.get(key);
}
const response = await thirdPartyApi.fetch();
cache.set(key, response, TTL);
```

We also created a nightly job to refresh cache entries that were about to expire:

```javascript
// Nightly job
cron.schedule('0 0 * * *', refreshExpiringCache);
```

Result: LCP improved by around ~2–3 seconds.

Takeaway: when slow third-party APIs are critical to your page, caching is a must-have.

Key Learnings

LCP is not merely a frontend rendering metric; it reflects the combined effect of JavaScript, APIs, images, and backend performance. The improvements therefore required changes to both frontend and backend systems.
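To make the cache-aside pattern from the pricing section concrete, here is a runnable sketch. An in-memory Map stands in for Redis, and slowPricingApi is a hypothetical stand-in for the slow third-party call; the real setup would use a Redis client and a shared TTL policy:

```javascript
// Cache-aside sketch: check the cache first, and only call the slow
// upstream API on a miss or after the entry has expired.
const cache = new Map(); // key -> { value, expiresAt }

async function cachedFetch(key, fetcher, ttlMs) {
  const entry = cache.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    return entry.value; // cache hit: skip the slow call entirely
  }
  const value = await fetcher(); // slow third-party request
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

let apiCalls = 0;
const slowPricingApi = async () => { apiCalls += 1; return { price: 41999 }; };

async function demo() {
  const first = await cachedFetch('price:vehicle-1', slowPricingApi, 60000);
  const second = await cachedFetch('price:vehicle-1', slowPricingApi, 60000); // served from cache
  return { first, second };
}
```

With this shape, only the first request in each TTL window pays the 2–3 second upstream cost; every other request is a fast cache read.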
Faster Releases With DevOps: Java Microservices and Angular UI in CI/CD
By Kavitha Thiyagarajan
In modern DevOps workflows, automating the build-test-deploy cycle is key to accelerating releases for both Java-based microservices and an Angular front end. Tools like Jenkins can detect changes to source code and run pipelines that compile code, execute tests, build artifacts, and deploy them to environments on AWS. A fully automated CI/CD pipeline drastically cuts down manual steps and errors. As one practitioner notes, Jenkins is a powerful CI/CD tool that significantly reduces manual effort and enables faster, more reliable deployments. By treating the entire delivery pipeline as code, teams get repeatable, versioned workflows that kick off on every Git commit via webhooks or polling.

Jenkins Pipelines as Code

Jenkins pipelines allow defining build, test, and deploy stages in a Jenkinsfile so that CI/CD is truly "pipeline-as-code." When developers push changes to Git, Jenkins can automatically start the pipeline. A typical Declarative Pipeline might look like:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { /* build steps here */ }
        }
        stage('Test') {
            steps { /* test steps here */ }
        }
        stage('Deliver') {
            steps { /* deploy steps here */ }
        }
    }
}
```

This approach version-controls the CI/CD logic along with the application code. Each stage appears in the Jenkins UI, showing real-time status. Plugins extend Jenkins in many ways: the NodeJS plugin lets a pipeline use a named Node installation to run npm or ng commands, and the Amazon ECR plugin provides steps to authenticate and push Docker images to AWS ECR.

Building Java Microservices

For Java microservices, a common pipeline starts with a Maven or Gradle build. For instance, a Build stage might run:

```shell
mvn -B -DskipTests clean package
```

This compiles the code and packages it into a JAR without running tests. Immediately following is a Test stage that runs the unit tests and archives the results. In Jenkins, one can even use the JUnit plugin to publish test reports.
For example:

```groovy
stage('Test') {
    steps {
        sh 'mvn test'
    }
    post {
        always {
            junit 'target/surefire-reports/*.xml'
        }
    }
}
```

This ensures test failures are reported in Jenkins and can stop the pipeline if needed. Static analysis or security scans can be added as additional stages before packaging. In practice, pushing code triggers the pipeline. As one blog describes: when the user pushes code, it triggers Jenkins; the pipeline builds the code using Maven, runs unit tests, and performs static code analysis; if the code passes, Jenkins builds a Docker image and pushes the image as the artifact. By automating these steps, developers get fast feedback on their changes without manual intervention.

Containerizing and Deploying Java Services

Microservices are often deployed in containers on AWS. The Jenkins pipeline can build and push Docker images automatically. For example, one might include in the Jenkinsfile:

```groovy
stage('Build & Tag Docker Image') {
    steps {
        sh 'docker build -t myrepo/myservice:latest .'
    }
}
stage('Push Docker Image') {
    steps {
        sh 'docker push myrepo/myservice:latest'
    }
}
```

Here, each push builds and tags the image. These commands can use Jenkins credentials or tools like docker.withRegistry to authenticate. Jenkins's Amazon ECR plugin simplifies this for AWS: a pipeline example shows setting environment { registry = "...amazonaws.com/myRepo"; registryCredential = "ecr-creds" }, then running docker.build() and docker.withRegistry(...) { dockerImage.push() }. Alternatively, one could invoke the AWS CLI: first authenticate (aws ecr get-login-password | docker login ...), then docker push. AWS documentation notes that you can push container images to an Amazon ECR repository with the docker push command once authentication is done. The CI/CD pipeline can automate creating the ECR repo if needed, tagging the image with the account's registry URI, and pushing it.
A successful pipeline run results in updated Docker images in ECR, ready for deployment. After pushing images, a final Deploy/Deliver stage can use AWS APIs or tools to launch the containers. For example, Jenkins could use kubectl to update an EKS deployment, or use AWS CodeDeploy/CodePipeline to roll out new versions. Even simply SSH'ing into an EC2 instance and running docker run can be automated in a Jenkins pipeline. The key is that committing code automatically packages and publishes the service, so teams ship faster with confidence.

Building and Deploying the Angular UI

The frontend Angular app is typically a static site that runs in the browser. The Jenkins pipeline for Angular is similar but uses NodeJS/npm. First, configure Jenkins with a NodeJS installation. A pipeline stage might then look like:

```groovy
stage('Build Angular') {
    steps {
        sh 'npm install'
        sh 'ng build --prod'
    }
}
```

This installs dependencies and runs ng build --prod, creating a production-ready bundle in the dist/ folder. If tests or linting are required, they can be added before the build step. Once built, the static files need to be hosted. A common approach on AWS is to use S3 and CloudFront. In Jenkins, a Deploy stage could use the AWS CLI to sync the dist/ contents to an S3 bucket. For example:

```shell
aws s3 sync dist/my-app/ s3://my-angular-bucket/ --acl public-read
```

or, as shown in a Jenkins pipeline example, simply:

```shell
aws s3 cp ./dist/ s3://my-bucket/ --recursive --acl public-read
```

This command copies the built site to S3, making it publicly accessible. Using CloudFront in front of the bucket delivers the files globally with caching, and Route 53 can point a custom domain to the distribution. In short, Jenkins fully automates the publish step, so every commit to the Angular repo triggers a build and S3 upload. By hosting the Angular app on S3 and CloudFront, the CI/CD pipeline keeps the frontend delivery serverless and scalable.
The build script is as simple as it gets: just copy the dist folder to S3 on each update. This release-ready static deploy ensures the front end is updated in lockstep with the backend services.

End-to-End CI/CD on AWS

In practice, one Jenkins pipeline can orchestrate both the Java and Angular builds. A multibranch pipeline could build the microservice repositories, push each to Docker/ECR, and also build and deploy the Angular UI repository in parallel. The general flow is:

1. Commit and trigger: A Git push to any service or UI repository triggers Jenkins via webhook or polling.
2. Build stages: Jenkins runs the defined stages. Java repos run Maven builds and code analysis plus a Docker build; the Angular repo runs npm/ng build.
3. Publish artifacts: Backend images are pushed to Amazon ECR (or Docker Hub). The Angular build is pushed to an S3 bucket.
4. Deploy stages: Finally, Jenkins can use the AWS CLI, CloudFormation, or deployment scripts to update running services. Even without containers, Jenkins could SSH in and deploy JARs to EC2.
5. Verification: Automated tests or smoke tests can run post-deploy to validate the release.

Key DevOps practices here include pipeline-as-code, consistent tooling, and immutable artifacts. Because the pipeline is triggered on each change, feedback is immediate: broken builds or tests fail the job early, preventing flawed code from reaching production. At the same time, successful runs deliver a full release-ready bundle. As one summary points out, adopting CI/CD ensures faster, more reliable deployments by cutting manual steps.

Summary

Using Jenkins for CI/CD of Java microservices and an Angular UI greatly accelerates release cycles. Engineers define build and deploy steps in code, so every commit runs through the same automated process. Java services are built with Maven, tested, containerized, pushed to AWS ECR, and deployed on EC2/ECS/EKS. The Angular app is built with the Angular CLI and deployed as a static site to S3.
Throughout this, Jenkins provides visibility and control: stages for build, test, and deploy show real-time status, and any failure halts the pipeline. By integrating with AWS, the pipeline taps into scalable cloud resources. For example, AWS ECR supports secure Docker registry workflows, and S3/CloudFront provides effortless frontend hosting. With everything automated, teams achieve the goal of continuous integration and continuous delivery, making each release faster and more reliable. In short, a well-designed Jenkins CI/CD pipeline for Java microservices and Angular ensures that code changes flow swiftly from commit to production with minimal manual overhead.
Intent-Driven AI Frontends: AI Assistance to Enterprise Angular Architecture
By Lavi Kumar
Reduce Frontend Complexity in ASP.NET Razor Pages Using HTMX
By Akash Lomas
Integrating OpenID Connect (OIDC) Authentication in Angular and React
By Renjith Kathalikkattil Ravindran
Swift: Master of Decoding Messy JSON

I recently came across an interesting challenge involving JSON decoding in Swift. Like many developers, when faced with a large, complex JSON response, my first instinct was to reach for "quick fix" tools. I wanted to see how online resources, various JSON-to-Swift converters, and even modern AI models would handle a messy, repetitive data structure. To be honest, I was completely underwhelmed.

The Problem: The "Flat" JSON Nightmare

The issue arises when you encounter a legacy API or a poorly structured response that uses "flat" numbered properties instead of clean arrays. Take a look at this JSON sample:

```json
{
  "meals": [
    {
      "idMeal": "52771",
      "strMeal": "Spicy Arrabiata Penne",
      "strInstructions": "Bring a large pot of water to a boil...",
      "strMealThumb": "https://www.themealdb.com/images/media/meals/ustsqw1468250014.jpg",
      "strIngredient1": "penne rigate",
      "strIngredient2": "olive oil",
      "strIngredient3": "garlic",
      "strIngredient4": "chopped tomatoes",
      "strIngredient5": "red chilli flakes",
      // ... this continues up to strIngredient20
      "strMeasure1": "1 pound",
      "strMeasure2": "1/4 cup",
      "strMeasure3": "3 cloves"
      // ... this continues up to strMeasure20
    }
  ]
}
```

Why Online Converters Fail

When I plugged this into standard conversion tools, the result was a maintenance nightmare. They generated a "wall of properties" that looked something like this:

```swift
struct Meal: Codable {
    let idMeal: String
    let strMeal: String
    let strInstructions: String?
    let strMealThumb: String?

    // The repetitive property nightmare
    let strIngredient1: String?
    let strIngredient2: String?
    let strIngredient3: String?
    // ...
    let strIngredient20: String?

    let strMeasure1: String?
    let strMeasure2: String?
    let strMeasure3: String?
    // ...
    let strMeasure20: String?
}
```

Let's be honest: the code generated by those online tools belongs in the "trash bin" for any serious project.
Not only is it unscalable, but imagine the look on your senior developer's face during a PR review when they see 40+ optional properties. It's a maintenance nightmare and a blow to your professional reputation. I decided to take control of the decoding process to make it clean, Swifty, and, most importantly, production-ready. Here is how I structured the solution and why it works.

The Secret Weapon: Why We Use a Struct for CodingKeys

In 99% of Swift tutorials, you see CodingKeys defined as an enum. Enums are great when you know every single key at compile time. But in our case, we have a "flat" JSON with keys like strIngredient1, strIngredient2... up to 20. Writing an enum with 40 cases is not just boring; it's bad engineering. That is why we use a struct instead.

1. Breaking the Protocol Requirements

To conform to CodingKey, a type must handle both String and Int values. By using a struct, we can pass any string into the initializer at runtime.

```swift
struct CodingKeys: CodingKey {
    let stringValue: String
    var intValue: Int?

    init?(stringValue: String) {
        self.stringValue = stringValue
    }

    // This allows us to map any raw string from the JSON to our logic
    init(rawValue: String) {
        self.stringValue = rawValue
    }

    init?(intValue: Int) {
        return nil // We don't need integer keys here
    }
}
```

2. Mapping "Ugly" Keys to Clean Names

You don't have to stick with the API's naming conventions inside your app. Notice how I used static var to create aliases. This keeps the rest of the decoding logic readable while keeping the "dirty" API keys isolated inside this struct.

```swift
static var name = CodingKeys(rawValue: "strMeal")
static var thumb = CodingKeys(rawValue: "strMealThumb")
static var instructions = CodingKeys(rawValue: "strInstructions")
```

3. The Power of Dynamic Key Generation

This is the part that makes this approach superior to any AI-generated code. We created static functions that use string interpolation to generate keys on the fly.
```swift
static func strIngredient(_ index: Int) -> Self {
    CodingKeys(rawValue: "strIngredient\(index)")
}

static func strMeasure(_ index: Int) -> Self {
    CodingKeys(rawValue: "strMeasure\(index)")
}
```

Instead of hardcoding strIngredient1, strIngredient2, etc., we now have a "key factory." When we loop through 1...20 in our initializer, we simply call these functions. It's clean, it's reusable, and it's significantly harder to make a typo than when writing 40 individual cases.

4. Building a Model That Actually Makes Sense

The original JSON treats an ingredient and its measurement as two strangers living in different houses. In our app, they are a couple. By nesting a dedicated struct, we fix the data architecture at the source:

```swift
struct Ingredient: Decodable, Hashable {
    let id: Int
    let name: String
    let measure: String
}
```

Why Hashable and the id? I added an id property using the loop index because modern SwiftUI views like List and ForEach require identifiable data. By conforming to Hashable, we ensure:

- No UI glitches: SwiftUI won't get confused if two different ingredients have the same name (like two different types of "Salt").
- Performance: Diffable data sources love hashable objects.

5. Cleaning Up the "API Smell"

Before we get to the initializer, look at how we define our main properties. We aren't just copying what the API gives us; we are translating it into clean Swift.

```swift
let name: String
let thumb: URL?
let instructions: String
let ingredients: [Ingredient]
```

- Goodbye str prefix: We dropped the Hungarian notation. name is better than strMeal.
- Proper types: We decode the thumbnail directly into a URL?. If the API sends a broken link or an empty string, our decoder handles it gracefully during the parsing phase, not later in the View.

6. The Smart Initializer: Our "Data Bouncer"

This is the finale. Instead of blindly accepting every key the JSON offers, our custom init(from:) acts like a bouncer at a club: only valid data gets in.
```swift
init(from decoder: any Decoder) throws {
    let container = try decoder.container(keyedBy: CodingKeys.self)

    // 1. Decode simple properties using our clean aliases
    self.name = try container.decode(String.self, forKey: .name)
    self.thumb = try? container.decode(URL.self, forKey: .thumb)
    self.instructions = try container.decode(String.self, forKey: .instructions)

    // 2. The dynamic decoding loop
    var ingredients: [Ingredient] = []
    for index in 1...20 {
        // We use 'try?' because some keys might be null or missing
        if let name = try? container.decode(String.self, forKey: .strIngredient(index)),
           let measure = try? container.decode(String.self, forKey: .strMeasure(index)),
           !name.isEmpty, !measure.isEmpty {
            // We only keep it if the name AND measure are valid and non-empty
            ingredients.append(Ingredient(id: index, name: name, measure: measure))
        }
    }
    self.ingredients = ingredients
}
```

The Final Result: Clean, Swifty, and UI-Ready

After all that work behind the scenes, look at what we've achieved. We have transformed a "flat" JSON nightmare into a model that is a joy to use. This is what the rest of your app sees now:

```swift
struct MealDetail {
    let name: String
    let instructions: String
    let thumb: URL?
    let ingredients: [Ingredient]
}
```

Pure Simplicity in the UI

Because we did the heavy lifting during the decoding phase, filtering empty values and grouping ingredients, our SwiftUI code becomes incredibly clean. We don't need any complex logic in the View; we just map the data directly to the components.

The Cherry on Top: Making Mocking Easy

You might have noticed one small side effect: when we define a custom init(from: Decoder), Swift stops generating the default memberwise initializer. This can make writing unit tests or SwiftUI Previews a bit annoying. To fix this and keep our codebase "test-friendly," we can add a simple extension that lets us create mock data for our UI without needing a JSON file.
```swift
extension MealDetail {
    // Restoring the ability to create manual instances for mocks and tests
    init(name: String, thumb: URL?, instructions: String, ingredients: [Ingredient]) {
        self.name = name
        self.thumb = thumb
        self.instructions = instructions
        self.ingredients = ingredients
    }
}
```

Now, creating a preview is as simple as:

```swift
let mock = MealDetail(name: "Pasta", thumb: nil, instructions: "Cook it.", ingredients: [])
```

Conclusion

The next time you're faced with a messy API, remember: don't let the backend dictate your front-end architecture. Online tools and AI might give you a quick "copy-paste" solution, but they often lead to technical debt. By taking control of your Decodable implementation, you create code that is:

- Readable: Clear, intent-based property names.
- Robust: Filters out empty or corrupt data at the source.
- Maintainable: Easy to test and easy to display in the UI.

Happy coding, and keep your models clean! Full code is here.

By Pavel Andreev
Stranger Things in Java: Enum Types

This article is part of the series "Stranger Things in Java," dedicated to language deep dives that will help us master even the strangest scenarios that can arise when we program. All articles are inspired by content from the book "Java for Aliens" (in English), the book "Il nuovo Java", and the book "Programmazione Java". This article is a short tutorial on enumeration types, also called enumerations or enums. They are one of the fundamental constructs of the Java language, alongside classes, interfaces, annotations, and records. They are particularly useful for representing sets of known and unchangeable values, such as the days of the week or the cardinal directions.

What Is an Enum?

An enum is declared with the enum keyword and typically contains a list of values, called the elements (or values, or constants) of the enumeration. Consider, for example:

```java
public enum CardinalDirection {
    NORTH, SOUTH, WEST, EAST;
}
```

Here, we defined an enum named CardinalDirection with four elements: NORTH, SOUTH, WEST, and EAST. The elements defined in the enumeration are the only possible instances of type CardinalDirection, and it is not possible to instantiate other objects of the same type. Therefore, if we tried to instantiate an object from the CardinalDirection enumeration, we would get a compilation error:

```java
var d = new CardinalDirection(); // ERROR: you cannot create new instances
```

Elements of an Enumeration

Using an enumeration, therefore, mainly means using its elements.
For example, the following method returns true if the direction parameter matches the NORTH element of CardinalDirection:

```java
static boolean isNorth(CardinalDirection direction) {
    return direction == CardinalDirection.NORTH;
}
```

In the following example, instead, we assign references to the elements of CardinalDirection:

```java
CardinalDirection d1 = CardinalDirection.SOUTH;
System.out.println(d1 == CardinalDirection.SOUTH); // true

var d2 = CardinalDirection.EAST;
System.out.println(d2 == CardinalDirection.WEST); // false
```

Each element is implicitly declared public, static, and final. From these examples we can observe that:

- To use the elements of an enumeration, you must always refer to them via the name of the enumeration (for example, CardinalDirection.SOUTH).
- We can compare elements directly with the == operator because they are implicitly final and unique.
- The names of enumeration elements follow the naming conventions for constants.

During compilation, the CardinalDirection enumeration is transformed into a class similar to the following, while the compiler ensures that no other elements can be instantiated besides those declared in the enumeration:

```java
public class CardinalDirection {
    public static final CardinalDirection NORTH = new CardinalDirection();
    public static final CardinalDirection SOUTH = new CardinalDirection();
    public static final CardinalDirection WEST = new CardinalDirection();
    public static final CardinalDirection EAST = new CardinalDirection();
}
```

* Backward compatibility is a fundamental feature of Java that ensures code written for earlier versions of the platform continues to work on more recent versions of the JVM, without requiring changes. Backward compatibility is one of the main reasons why Java is widely used in enterprise environments and long-lived systems.

Why Use Enumerations?

One of the main advantages of enumerations is the ability to represent a limited set of values in a safe way.
Without an enum, there is a risk of using strings or “magic” numbers, which can introduce errors that are difficult to detect. Let us consider the following example: Java public class Compass { public void move(String direction) { String message; if (direction.equals("NORTH")) { message = "You move north"; } else if (direction.equals("SOUTH")) { message = "You move south"; } else if (direction.equals("WEST")) { message = "You move west"; } else if (direction.equals("EAST")) { message = "You move east"; } else { message = "Invalid direction: " + direction; } System.out.println(message); } } With this approach, it is possible to pass any string, even an invalid one. For example, the value of the direction parameter could be "north" or "North", but it should be "NORTH" in order for the method to work correctly. The compiler cannot help us prevent such errors. In the following code, we use the CardinalDirection enumeration to completely eliminate arbitrary values and delegate the validation of the allowed values to the compiler: Java public class Compass { public void move(CardinalDirection direction) { String message = switch (direction) { case NORTH -> "You move north"; case SOUTH -> "You move south"; case WEST -> "You move west"; case EAST -> "You move east"; }; System.out.println(message); } } In this way:
- The direction parameter can only take the values defined in the enumeration.
- It is not possible to specify an invalid value: such an error would be detected at compile time.
- The switch expression must be exhaustive**; the compiler therefore requires that all alternatives be handled in order to compile without errors.
Enumerations make code safer and more readable, because they avoid the use of “magic” values or arbitrary strings to represent concepts that have a limited number of alternatives.
** To learn about the concept of exhaustiveness related to switch expressions, introduced in Java 14, you can read the article entitled “The new switch.” Enumerations and Inheritance We have seen that the compiler transforms the CardinalDirection enum into a class whose elements are implicitly declared public, static, and final. However, we have not yet said that such a class:
- Is itself declared final. This implies that enumerations cannot be extended.
- Extends the generic class java.lang.Enum. Consequently, it cannot extend other classes (but it can still implement interfaces).
In practice, the declaration of the CardinalDirection class will be similar to the following: Java public final class CardinalDirection extends Enum<CardinalDirection> { // rest of the code omitted } Therefore, we cannot create hierarchies of enumerations in the same way we do with classes. Moreover, all enumerations inherit:
- The methods declared in the Enum class.
- The methods and properties of the Serializable and Comparable interfaces, which are implemented by Enum.
- The methods from the Object class.
However, in the last paragraph of this tutorial, we will see how it is possible, in some sense, to extend an enumeration. Methods Inherited From the Enum Class By extending Enum, enumerations inherit several methods:
- name: returns the name of the element as a string (it cannot be overridden because it is declared final).
- toString: returns the same value as name, but it can be overridden.
- ordinal: returns the position of the element in the enumeration starting from index 0 (it is declared final).
- valueOf: a static method that takes a String as input and returns the enumeration element corresponding to the name.
- values: a static method not actually present in java.lang.Enum, but generated by the compiler for each enumeration. It returns an array containing all enumeration elements in the order in which they are declared.
For example, the name method is defined to return the element name, so: Java System.out.println(CardinalDirection.SOUTH.name()); // prints SOUTH will print the string "SOUTH". The equivalent toString method also returns the enum name, so the instruction: Java System.out.println(CardinalDirection.SOUTH); // prints SOUTH produces exactly the same result (since println calls toString on the input object). The difference is that the name method cannot be overridden because it is declared final, while the equivalent toString method can always be overridden. Enum also declares a method complementary to toString: the static method valueOf. It takes a String as input and returns the corresponding enumeration value. For example: Java CardinalDirection direction = CardinalDirection.valueOf("NORTH"); System.out.println(direction == CardinalDirection.NORTH); // prints true The special static values method returns an array containing all enumeration elements in the order in which they were declared. You can use this method to iterate over the values of an enumeration you do not know. For example, using an enhanced for loop, we can print the contents of the CardinalDirection enumeration, also introducing the ordinal method: Java for (CardinalDirection cd : CardinalDirection.values()) { System.out.println(cd + "\t is at position " + cd.ordinal()); } Note that the ordinal method (also declared final) returns the position of an element within the array returned by values. The output of the previous example is therefore: Java NORTH is at position 0 SOUTH is at position 1 WEST is at position 2 EAST is at position 3 For completeness, the Enum class actually declares two other, less interesting methods:
- getDeclaringClass: returns the Class object associated with the enum type to which the value on which the method is invoked belongs.
- describeConstable: a method introduced in Java 12 to support advanced constant descriptions for low-level APIs.
This is a specialized API that is not used in traditional application development. Methods and Properties Inherited From the Serializable and Comparable Interfaces The Enum class implements the Serializable and Comparable interfaces, and consequently, enumeration objects have the properties of being serializable and comparable. While the marker interface Serializable does not contain methods, the functional interface Comparable makes the natural ordering of the elements of an enumeration coincide with the order in which the elements are defined within the enumeration itself. This means that compareTo, the only abstract method of the Comparable interface, determines the ordering of two enumeration objects based on their position within the enumeration. Note that the compareTo method is declared final in the Enum class, and therefore it cannot be overridden in our enumerations. Methods Inherited From the Object Class Enumerations inherit all 11 methods of the Object class. In particular, as already mentioned, we can override the toString method. The other methods we usually override, such as equals, hashCode, and clone, are instead declared final and therefore cannot be overridden. In fact, to compare two enumeration instances, it is sufficient to use the == operator, since enumeration values are constants, and therefore, there is no need to redefine the equals and hashCode methods. Moreover, enumerations cannot be cloned, since their elements must be the only possible instances. For this reason, the java.lang.Enum class declares the clone method inherited from Object as final. Since it is also declared protected, it is not even visible outside the java.lang package. Customizing an Enumeration Since it is transformed into a class, an enumeration can declare everything that can be declared in a class (methods, variables, nested types, initializers, and so on), with some constraints on constructors.
In fact, constructors are implicitly considered private and can only be used by the enumeration elements through a special syntax. Compilation will instead fail if we try to create new instances using the new operator. For example, the following code redefines the CardinalDirection enumeration: Java public enum CardinalDirection { NORTH("north"), // invokes constructor 2 SOUTH("south"), // invokes constructor 2 WEST("west"), // invokes constructor 2 EAST; // invokes constructor 1 // equivalent to EAST() // instance variable private String description; // constructor number 1 private CardinalDirection() { this("east"); // calls constructor 2 } // constructor number 2 (implicitly private) CardinalDirection(String direction) { setDescription("direction " + direction); } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } @Override public String toString() { return "We are pointing " + description; } } We can observe that:
- We declared two constructors in the enumeration: one explicitly private and the other implicitly private. It is not possible to declare constructors with public or protected visibility. Apart from this, the same rules that apply to class constructors also apply here. As with classes, if we do not declare any constructors, the compiler will add a no-argument constructor for us (the default constructor). Also, as with classes, the default constructor will not be generated if we explicitly declare at least one constructor, as in the previous example.
- The enumeration elements invoke the declared constructors using a special syntax. In the declaration of the NORTH, SOUTH, and WEST elements, we added a pair of parentheses and passed a string parameter. This ensures that constructor number 2, which takes a String parameter, is invoked when these instances are created. The EAST element, instead, does not use parentheses and therefore invokes the no-argument constructor.
Note that we could also have added an empty pair of parentheses to obtain the same result.
- The declaration of the enumeration elements must always precede all other declarations. If we placed any declaration before the element list, compilation would fail. Note that the semicolon after the element list is optional if no other members are declared.
Static Nested Enumerations and Static Imports It is not uncommon to create nested enumerations, which, unlike nested classes, are always static. For example, suppose we want to create an enumeration that defines the possible account types (for example "standard" and "premium") for customers of an online shop. Since the account type is strictly related to the concept of an account, it makes sense to declare the Type enumeration nested within the Account class: Java package com.online.shop.data; public class Account { public enum Type {STANDARD, PREMIUM} // static nested enum // other code omitted... public static void main(String[] args) { System.out.println(Type.PREMIUM); // access to the static enumeration } } If instead we want to print an enumeration element from outside the Account class, we can use the following syntax: Java System.out.println(Account.Type.PREMIUM); Of course, we can also use static import when appropriate, for example: Java import static com.online.shop.data.Account.Type; // ... System.out.println(Type.PREMIUM); Or even: Java import static com.online.shop.data.Account.Type.PREMIUM; // ... System.out.println(PREMIUM); Enumerations and static import were both introduced in Java 5. static import, in fact, allows us to reduce verbosity when using enumerations. Extending an Enumeration We know that we cannot extend an enumeration; however, it is possible to use the anonymous class syntax for each element, redefining the methods declared in the enumeration. We can define methods in the enumeration and override them in its elements.
Let us rewrite the CardinalDirection enumeration once again: Java public enum CardinalDirection { NORTH { @Override public void test() { System.out.println("method of NORTH"); } }, SOUTH, WEST, EAST; public void test() { System.out.println("method of the enum"); } } Here, we defined a method called test that prints the string "method of the enum". The NORTH element, however, using a syntax similar to that of anonymous classes, also declares the same method, overriding it. In fact, the compiler will turn the NORTH element into an instance of an anonymous class that extends CardinalDirection. Therefore, the statement: Java CardinalDirection.NORTH.test(); Will print: Java method of NORTH While the statement: Java CardinalDirection.SOUTH.test(); Will print: Java method of the enum Because SOUTH does not override test. The same output will be produced when invoking the test method on the EAST and WEST elements as well. Enumerations and Polymorphism After examining the relationship between enumerations and inheritance, we can now use enumerations by exploiting polymorphism in a more advanced way. 
For example, let us consider the following Operation interface: Java public interface Operation { boolean execute(int a, int b); } We can implement this interface within an enumeration Comparison, customizing the implementation of the execute method for each element: Java public enum Comparison implements Operation { GREATER { public boolean execute(int a, int b) { return a > b; } }, LESS { public boolean execute(int a, int b) { return a < b; } }, EQUAL { public boolean execute(int a, int b) { return a == b; } }; } With this structure, we can write code such as the following: Java boolean result = Comparison.GREATER.execute(10, 5); System.out.println("10 greater than 5 = " + result); result = Comparison.LESS.execute(10, 5); System.out.println("10 less than 5 = " + result); result = Comparison.EQUAL.execute(10, 5); System.out.println("10 equal to 5 = " + result); Which will produce the following output: Java 10 greater than 5 = true 10 less than 5 = false 10 equal to 5 = false Implementing an interface in an enumeration allows you to associate a behavior with each enum value and exploit polymorphism, making the code more extensible, readable, and robust. Conclusion Enumerations are particularly useful when:
- The domain of values is closed and known in advance, such as cardinal directions, object states, days of the week, priority levels, and so on.
- You want to make code safer by eliminating arbitrary strings or “magic” numbers.
- Each element must be able to have specific properties or methods, while maintaining clarity and readability.
They are less suitable when:
- The elements can vary dynamically over time, for example, if they come from a database or external configurations.
- You want to model an extensible hierarchy of types, for which classes and interfaces remain more flexible solutions.
In this article, we have seen that enumerations are not simply lists of constants, but real classes with predefined instances, methods inherited from Enum, the ability to implement interfaces, and even the possibility to redefine behavior for individual values through anonymous classes. These aspects make enum a surprisingly powerful and, in some cases, unexpected tool: a perfect example of stranger things in Java. Author’s Note This article is based on some paragraphs from chapters 4 and 7 of my book “Programmazione Java” and from my English book “Java for Aliens.”

By Claudio De Sio Cesari
Beyond the Chatbot: Engineering a Real-World GitHub Auditor in TypeScript

AI agents have taken the world by storm and are making positive gains in domains such as healthcare, marketing, software development, and more. The chief reason for their prominence lies in being able to automate routine tasks with intelligence. For example, in software development, stories and bugs have automated tracking in tools such as GitHub, Rally, and Jira; however, this automation lacks intelligence, often requiring engineers and project managers to triage them. Using an AI agent, as you will learn in this article, smart triaging can be carried out using generative AI. AI agents can be developed using many techniques and in several programming languages. Python has been a leader in the AI and ML space, whereas JavaScript has been the undisputed king in web development and has been prominent in back-end development as well. Historically, popular AI agent development frameworks have had their roots in Python, but their JavaScript ports have become mature in the recent past. This emergence allows a large number of JavaScript engineers to create their own AI agents without switching stacks. This article focuses on this shift, showing you how to develop an AI agent using JavaScript. Before you build the agent, though, it is important to understand the differences between an AI agent and a pure LLM. Basic Anatomy of an Agent An AI agent is capable of going beyond just querying an LLM for an answer. Before returning an answer, the agent takes several autonomous actions:
- Unlike an LLM, an agent can follow tasks via a loop (known as ReAct) where it can plan, observe, and rerun to get the output of a task before a final answer.
- An agent can query data, run commands on a terminal, call an API, and more. This feature is known as tool calling.
- Furthermore, an agent remembers details through extended memory and can build a longer working memory by leveraging vector databases.
- The agent can handle failures and initiate smart error handling and self-corrections.
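The loop described above can be sketched in a few lines of plain TypeScript. This is a hypothetical illustration of the plan, act, observe cycle only, not LangChain's implementation: the planner and the single tool are stubs standing in for the LLM and a real API call.

```typescript
// Hypothetical sketch of a ReAct-style loop: plan, act, observe, repeat.
type Tool = (input: string) => string;

const tools: Record<string, Tool> = {
  // Toy tool standing in for a real action (e.g., fetching GitHub issues).
  search: (query) => `3 open issues matching "${query}"`,
};

function plan(task: string, observations: string[]) {
  // Stubbed planning step: a real agent would ask the LLM to pick the next
  // action based on the observations gathered so far. Here we stop as soon
  // as we have one observation.
  if (observations.length > 0) return { done: true as const };
  return { done: false as const, tool: "search", input: task };
}

function runAgent(task: string, maxSteps = 5): string[] {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = plan(task, observations);          // plan
    if (action.done) break;                           // terminate
    const result = tools[action.tool](action.input);  // act (tool call)
    observations.push(result);                        // observe
  }
  return observations;
}

console.log(runAgent("triage bugs"));
```

The real agent built later in this article delegates exactly this loop (planning, tool invocation, and termination) to the framework, so you never write it by hand.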
Please refer to the illustration below to understand an AI agent. Please note that task status is technically part of the agent only, but here it is used to clearly highlight the ReAct loop. Image: Basic Anatomy of an Agent In the next section, you will implement an actual AI agent to understand each of these parts in more detail. Scaffolding a New Project You are going to build this project using TypeScript, which is a superset of JavaScript. You get all the freedom of JavaScript while gaining extra checks to maintain structure. Additionally, TypeScript helps avoid common JavaScript errors caused by a lack of typing, a feature that most modern programming languages support. Create a new project by running the npm init command in your terminal. This will initialize the project and generate a package.json file in the project root. Replace the content of your package.json with the following: JSON { "name": "github-agent", "version": "1.0.0", "type": "module", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "npx tsx src/agent.ts" }, "keywords": [], "author": "", "license": "ISC", "description": "", "dependencies": { "@langchain/community": "^1.1.1", "@langchain/core": "^1.1.8", "@langchain/google-genai": "^2.1.3", "@langchain/langgraph": "^1.0.0", "@langchain/openai": "^1.2.0", "@octokit/core": "^7.0.6", "dotenv": "^16.6.1", "langchain": "^1.1.0", "octokit": "^5.0.5" }, "devDependencies": { "@types/node": "^25.0.3", "ts-node": "^10.9.2", "typescript": "^5.9.3" } } There are several dependencies defined in the package.json:
- LangChain family: Core dependencies that provide the framework for building your agent, including @langchain/langgraph, which supplies the createReactAgent helper used later.
- Octokit: Leveraged to authenticate with GitHub when the agent needs to access current issues.
- Dotenv: Helps you parameterize the project by managing sensitive API keys.
- Dev dependencies: Mainly focus on converting TypeScript code to JavaScript for final execution.
Run npm install to actually install these dependencies.
Before writing the logic, create a .env file in your root directory. This is where you store sensitive credentials. I have included an .env.example file for your reference. You can rename it to .env and replace the placeholders with your personal details. Never commit this file. Shell GOOGLE_API_KEY=<GOOGLE-API-KEY> GITHUB_TOKEN=<GITHUB_TOKEN> With these settings, you are now ready to build the actual agent. Creating the GitHub Agent Step 1: Build the Tool The first things you will need are the resources the agent uses to communicate with GitHub and fetch the issues. These resources are known as tools, as you learned earlier. Create a directory in the root of the project and name it src. Inside the src directory, create another directory named tools. Inside this tools directory, create a file named github.ts and add the code below: TypeScript import { tool } from "@langchain/core/tools"; import { Octokit } from "@octokit/core"; const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN }); export const fetchRepoIssues = tool( async ({ owner, repo, count = 5 }) => { try { console.log(`[SYSTEM]: Fetching live issues (excluding PRs) from ${owner}/${repo}...`); const { data } = await octokit.request("GET /repos/{owner}/{repo}/issues", { owner, repo, state: "open", per_page: count * 2, }); const onlyIssues = data .filter((item: any) => !item.pull_request) .slice(0, count); if (onlyIssues.length === 0) return `No open issues found in ${owner}/${repo}.`; const digest = onlyIssues.map((issue: any) => { return `Issue #${issue.number}: "${issue.title}" Author: ${issue.user?.login} Labels: ${issue.labels.map((l: any) => l.name).join(", ") || "None"} Body: ${issue.body?.substring(0, 300) || "No description provided"}...`; }).join("\n\n---\n\n"); return `Successfully fetched ${onlyIssues.length} issues from ${owner}/${repo}:\n\n${digest}`; } catch (error: any) { return `Error fetching GitHub issues: ${error.message}`; } }, { name: "fetch_repo_issues",
description: "Fetches live open issues (excluding PRs) from a GitHub repository.", schema: { type: "object", properties: { owner: { type: "string", description: "The repo owner (e.g., 'facebook')" }, repo: { type: "string", description: "The repo name (e.g., 'react')" }, count: { type: "number", description: "Number of real issues to return" } }, required: ["owner", "repo"] } } );
- You are using the Octokit library to authenticate with GitHub before actual issues from a repository can be fetched.
- Furthermore, you are filtering for true issues, which is achieved by excluding items that have an associated pull request.
- Additionally, you are trimming the issue body to 300 characters, which provides the agent with enough context about the issue while keeping LLM token usage limited and cost-effective.
- You are also creating the schema that the agent will leverage when calling the tool, so that the interaction between the agent and the tool can be more deterministic.
Step 2: Build the Agent Now it is time to build the agent. Create a file named agent.ts inside the src directory and add the code below to it: TypeScript import "dotenv/config"; import { ChatGoogleGenerativeAI } from "@langchain/google-genai"; import { createReactAgent } from "@langchain/langgraph/prebuilt"; import { fetchRepoIssues } from "./tools/github.js"; const model = new ChatGoogleGenerativeAI({ model: "gemini-3-flash-preview", temperature: 0, }); const SYSTEM_PROMPT = ` You are a Senior GitHub Maintainer. Your goal is to help users understand the status of their repositories. When asked about issues, use the fetch_repo_issues tool. Always provide a technical summary of the top issue you find.
`; const agent = createReactAgent({ llm: model, tools: [fetchRepoIssues], messageModifier: SYSTEM_PROMPT, }); async function runAuditor() { console.log("--- Starting GitHub Auditor ---"); const response = await agent.invoke({ messages: [ { role: "user", content: "What are the latest issues in the facebook/react repo?" } ], }); const lastMessage = response.messages[response.messages.length - 1]; console.log("\nMaintainer's Report:"); console.log(lastMessage.content); } runAuditor(); This code uses the LangChain framework to create the agent. The first step is to tell the agent which LLM model to work with. The code above uses the gemini-3-flash-preview model, which is one of the fastest flagship models at the time of writing this. Additionally, you set the temperature of the model, which tells the model how creative or deterministic it should be. Setting it to 0 makes the model more deterministic, which aligns well with the technical nature of the task this agent is going to perform. The next step is to create the brain of the agent, which is nothing but the prompt the agent will use to act. This prompt tells the agent what its role is, what tools it can use, and what constraints it should adhere to. Moreover, it needs to be told how the output should be formatted. Now that all the magical ingredients are ready, it's time to make the soup — the agent itself. You create the agent by invoking the createReactAgent function and passing it the model you selected, the tools the agent can use, and the system prompt as a messageModifier. By passing the tools array directly into createReactAgent, LangGraph automatically handles the conversion of your TypeScript tool definitions into the JSON schemas that the LLM expects. You learned about the ReAct loop of an AI agent earlier. The createReactAgent function is responsible for not only creating the agent but also managing this loop.
This function acts as an orchestrator of the tasks the agent and LLM perform, handling state and schema management while also taking care of the termination logic. Note: Even though the tool file is named github.ts, you import it using the .js extension. This is a requirement of modern Node.js ECMAScript Modules. With this, you are ready to start running your agent. Running the Agent On your terminal, start the agent with the npm start command. You should see output similar to the illustration given below. Image: Agent Response As you can see in the illustration above, the agent logs its internal actions, such as the tool call, highlighting that it intelligently understood the need for live data from GitHub. After retrieving the data, it parses and analyzes the information to create a structured report on the top issues, just like a Senior Maintainer. Conclusion In this article, you have learned what an AI agent is, how to build one, and, most importantly, how to use it in a real-world scenario. It's like having a co-worker assisting you with complex tasks. The entire code for the project you built here is available at this GitHub Link, but I highly recommend you follow along with the steps above rather than just cloning and running the project. Building it piece by piece is the best way to truly understand how the "brain" and "hands" of an agent work together.

By Anujkumarsinh Donvir
Building a Unified API Documentation Portal with React, Redoc, and Automatic RAML-to-OpenAPI Conversion

In today’s microservices-driven world (even with the evolution of AI), organizations often maintain dozens or even hundreds of APIs that are critical to building many software applications. These APIs may use different specification formats: some teams prefer OpenAPI 3.x for its widespread tooling support, whereas others maintain legacy RAML specifications that still power critical services. The challenge? Providing a unified, professional documentation experience without requiring teams to manually convert their specifications or maintain multiple documentation systems. In this article, I will walk you through building an API Documentation Portal that:
- Renders both OpenAPI 3.x and RAML 1.0 specifications
- Automatically converts RAML to OpenAPI at build time
- Provides beautiful, interactive documentation using Redoc
- Deploys as a static site to any CDN or cloud storage
Let’s dive in. The Technology Stack Before we start coding, here is what we are working with:
- React 18: UI framework
- TypeScript: Type safety
- Vite: Lightning-fast build tool
- Redoc: API documentation rendering
- Tailwind CSS: Styling
- webapi-parser: RAML-to-OpenAPI conversion
Why this stack? React and Vite provide a modern development experience with hot module replacement. Redoc provides a polished three-panel documentation layout that developers love. The webapi-parser handles the heavy lifting of format conversion. Project Architecture The portal follows a simple but effective architecture: in the browser, a sidebar lists the APIs grouped by category (for example, Category A with API 1 and API 2, Category B with API 3), and selecting one renders it in the Redoc viewer in the main content area, showing the API info, endpoints, request/response schemas, and code samples. The key insight is that RAML files are converted to the OpenAPI format at build time rather than at runtime.
This means:
- No server-side processing is required
- Faster page loads
- Simpler deployment (pure static files)
Setting Up the Project Step 1: Initialize the Project Shell npm create vite@latest apidoc -- --template react-ts cd apidoc npm install Step 2: Install Dependencies Core dependencies: Shell npm install react-router-dom redoc mobx styled-components Dev dependencies: Shell npm install -D tailwindcss postcss autoprefixer webapi-parser npx tailwindcss init -p Step 3: Define the API Configuration Type Create a type definition for the API registry: // src/types/api.ts TypeScript export interface ApiSpec { id: string; name: string; version: string; category: string; specPath: string; type: 'openapi' | 'raml'; description?: string; } export interface ApisConfig { apis: ApiSpec[]; } This interface enforces consistency across all API entries and enables TypeScript to detect configuration errors at compile time. Building the Core Components The API Configuration Store your API registry in a JSON file that is easy to update: // src/config/apis.json JSON { "apis": [ { "id": "petstore-api", "name": "Petstore API", "version": "1.0.0", "category": "Sample APIs", "specPath": "/specs/petstore.yaml", "type": "openapi", "description": "Sample OpenAPI 3.0 specification" }, { "id": "users-api", "name": "Users API", "version": "1.0.0", "category": "Sample APIs", "specPath": "/specs/users.raml", "type": "raml", "description": "Sample RAML 1.0 specification" } ] } The Redoc Wrapper Component Here is where the magic happens. The ApiDoc component wraps Redoc and handles the RAML-to-OpenAPI path transformation. // src/components/ApiDoc.tsx TypeScript import { RedocStandalone } from 'redoc'; import { ApiSpec } from '../types/api'; interface ApiDocProps { api: ApiSpec; } export function ApiDoc({ api }: ApiDocProps) { // For RAML files, use the converted OpenAPI version const specUrl = api.type === 'raml' ?
api.specPath.replace(/\.raml$/, '.converted.yaml') : api.specPath; return ( <div className="h-full overflow-auto"> <RedocStandalone specUrl={specUrl} options={{ scrollYOffset: 0, hideDownloadButton: false, pathInMiddlePanel: true, theme: { colors: { primary: { main: '#3b82f6' }, }, sidebar: { backgroundColor: '#1f2937', textColor: '#f3f4f6', }, }, }} /> </div> ); } The key line is the specUrl calculation: if the API type is raml, we swap the .raml extension for .converted.yaml. This assumes that our build process has already generated the converted file. The Navigation Sidebar A categorized sidebar makes it easy to navigate between APIs: // src/components/Sidebar.tsx TypeScript import { NavLink } from 'react-router-dom'; import { ApiSpec } from '../types/api'; interface SidebarProps { apis: ApiSpec[]; collapsed: boolean; onToggle: () => void; } export function Sidebar({ apis, collapsed, onToggle }: SidebarProps) { // Group APIs by category const groupedApis = apis.reduce((acc, api) => { if (!acc[api.category]) { acc[api.category] = []; } acc[api.category].push(api); return acc; }, {} as Record<string, ApiSpec[]>); return ( <aside className={`bg-gray-900 text-white ${collapsed ? 'w-16' : 'w-64'}`}> <div className="p-4 border-b border-gray-700"> {!collapsed && <h1 className="text-xl font-bold">API Docs</h1>} </div> <nav className="p-2"> {Object.entries(groupedApis).map(([category, categoryApis]) => ( <div key={category} className="mb-4"> {!collapsed && ( <h2 className="px-3 py-2 text-xs font-semibold text-gray-400 uppercase"> {category} </h2> )} <ul> {categoryApis.map((api) => ( <li key={api.id}> <NavLink to={`/docs/${api.id}`} className={({ isActive }) => `block px-3 py-2 rounded-md text-sm ${ isActive ?
'bg-blue-600' : 'hover:bg-gray-800' }` } > <div className="font-medium">{api.name}</div> <div className="text-xs text-gray-400"> v{api.version} • {api.type.toUpperCase()} </div> </NavLink> </li> ))} </ul> </div> ))} </nav> </aside> ); } The RAML Conversion Script This is the secret sauce that enables RAML support. The script runs at build time and converts all RAML files to OpenAPI 3.0. // scripts/convert-raml.js JavaScript import { readFileSync, writeFileSync, existsSync } from 'fs'; import { join, dirname } from 'path'; import { fileURLToPath } from 'url'; import wap from 'webapi-parser'; const __dirname = dirname(fileURLToPath(import.meta.url)); const rootDir = join(__dirname, '..'); // Read the APIs config const configPath = join(rootDir, 'src/config/apis.json'); const config = JSON.parse(readFileSync(configPath, 'utf-8')); // Find RAML APIs const ramlApis = config.apis.filter(api => api.type === 'raml'); async function convertRamlFiles() { const { WebApiParser } = wap; for (const api of ramlApis) { const ramlPath = join(rootDir, 'public', api.specPath); const outputPath = ramlPath.replace(/\.raml$/, '.converted.yaml'); if (!existsSync(ramlPath)) { console.warn(`RAML file not found: ${ramlPath}`); continue; } console.log(`Converting ${api.name}...`); try { const ramlContent = readFileSync(ramlPath, 'utf-8'); // Parse RAML 1.0 const model = await WebApiParser.raml10.parse(ramlContent); // Resolve all references const resolved = await WebApiParser.raml10.resolve(model); // Generate OpenAPI 3.0 const oas30 = await WebApiParser.oas30.generateString(resolved); writeFileSync(outputPath, oas30); console.log(` ✓ Converted to ${outputPath}`); } catch (error) { console.error(` ✗ Error: ${error.message}`); } } } convertRamlFiles(); Wire it into your build process: // package.json JSON { "scripts": { "convert-raml": "node scripts/convert-raml.js", "dev": "npm run convert-raml && vite", "build": "npm run convert-raml && tsc -b && vite build" } } Now, every time you run npm 
run dev or npm run build, the RAML files are converted automatically.

Deployment with GitHub Actions

To automate deployment to AWS S3, a GitHub Actions workflow can be created as follows (optional):

```yaml
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build project
        run: npm run build
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ vars.AWS_REGION }}
      - name: Deploy to S3
        run: aws s3 sync dist/ s3://${{ vars.S3_BUCKET }} --delete
```

Adding a New API

Once the portal is set up, adding a new API is straightforward:

1. Drop your spec file into public/specs/
2. Add an entry to src/config/apis.json
3. Commit and push — the CI/CD pipeline handles everything else

```json
{
  "id": "my-new-api",
  "name": "My New API",
  "version": "2.0.0",
  "category": "Production",
  "specPath": "/specs/my-new-api.yaml",
  "type": "openapi",
  "description": "My awesome new API"
}
```

Final Outcome

Once the application is deployed and hosted as a static site, the final outcome looks as shown below.
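Going back to the configuration-driven workflow: since apis.json is the single source of truth, a small sanity check can catch malformed entries before they reach CI. The helper below is hypothetical (not part of the article's portal code); the required field list simply mirrors the apis.json entry format shown earlier.

```javascript
// validate-apis.js: hypothetical sanity check for src/config/apis.json entries.
const REQUIRED_FIELDS = ['id', 'name', 'version', 'specPath', 'type'];
const KNOWN_TYPES = ['openapi', 'raml'];

function validateApi(api) {
  const errors = [];
  // Every entry must carry the fields the portal reads at build time.
  for (const field of REQUIRED_FIELDS) {
    if (!api[field]) errors.push(`missing field: ${field}`);
  }
  // Only the two spec formats the portal understands are allowed.
  if (api.type && !KNOWN_TYPES.includes(api.type)) {
    errors.push(`unknown type: ${api.type}`);
  }
  // Spec files are expected under public/specs/, served at /specs/.
  if (api.specPath && !api.specPath.startsWith('/specs/')) {
    errors.push(`specPath should live under /specs/: ${api.specPath}`);
  }
  return errors;
}

// A well-formed entry produces no errors:
console.log(validateApi({
  id: 'my-new-api',
  name: 'My New API',
  version: '2.0.0',
  specPath: '/specs/my-new-api.yaml',
  type: 'openapi',
}));
// → []
```

Running this as a pre-commit hook or an extra CI step keeps a typo in a spec path from producing a silently broken documentation page.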
Key Takeaways

- Unified Experience: Users see a consistent documentation interface regardless of whether the underlying specification is OpenAPI or RAML.
- Build-Time Conversion: Converting RAML to OpenAPI at build time eliminates runtime complexity and enables static hosting.
- Configuration-Driven: Adding new APIs requires only a JSON entry and a spec file — no code changes.
- Static Deployment: The entire portal is static HTML/JS/CSS, making it perfect for S3, CloudFront, Netlify, or any other static host.
- Developer Experience: Vite’s hot module replacement and TypeScript’s type checking make development fast and safe.

Conclusion

Building a unified API documentation portal does not have to be complicated. By combining React, Redoc, and automatic RAML conversion, you can deliver a professional documentation experience that scales with your organization’s API ecosystem. The complete source code is available under the MIT License. You can fork it, customize the theme, and make it your own. What challenges have you faced regarding API documentation? Feel free to share your thoughts in the comments section.

References

Official Documentation & Specifications

- OpenAPI Specification v3.2.0: https://spec.openapis.org/oas/v3.2.0.html
- OpenAPI Initiative: https://www.openapis.org/
- OpenAPI Specification GitHub Repository: https://github.com/OAI/OpenAPI-Specification
- RAML Official Website: https://raml.org/
- RAML 1.0 Specification: https://github.com/raml-org/raml-spec/blob/master/versions/raml-10/raml-10.md
- Redoc GitHub Repository: https://github.com/Redocly/redoc
- Redoc Official Documentation: https://redocly.com/docs/redoc
- React Official Documentation: https://react.dev/
- Vite Official Documentation: https://vite.dev/
- Tailwind CSS Documentation: https://tailwindcss.com/docs
- GitHub Actions Documentation: https://docs.github.com/en/actions
- webapi-parser (npm): https://www.npmjs.com/package/webapi-parser

Academic & Research Papers

- Meng, M., Steinhardt, S., & Schubert, A. (2019). "How Developers Use API Documentation: An Observation Study." Communication Design Quarterly, Vol. 7, No. 2. ACM SIGDOC. https://dl.acm.org/doi/10.1145/3358931.3358937
- Henkel, M., & Keshishzadeh, S. (2020). "Optimizing API Documentation." ACM SIGDOC Annual International Conference on Design of Communication. https://dl.acm.org/doi/fullHtml/10.1145/3380851.3416759
- IEEE/ACM ICSE (2024). "Managing API Evolution in Microservice Architecture." 46th International Conference on Software Engineering. https://dl.acm.org/doi/10.1145/3639478.3639800
- Journal of Systems and Software (2024). "Microservice API Evolution in Practice: A Study on Strategies and Challenges." Elsevier. https://www.sciencedirect.com/science/article/pii/S0164121224001559
- Di Francesco, P., Malavolta, I., & Lago, P. (2021). "On Microservice Analysis and Architecture Evolution: A Systematic Mapping Study." MDPI Applied Sciences. https://www.mdpi.com/2076-3417/11/17/7856
- Heinrich, R., et al. (2017). "Performance Engineering for Microservices: Research Challenges and Directions." ACM/SPEC International Conference on Performance Engineering. https://dl.acm.org/doi/10.1145/3053600.3053653

Industry Reports & Best Practices

- SmartBear. "State of Software Quality - API Report 2023." https://smartbear.com/state-of-software-quality/api/
- Postman. "API Documentation: How to Write, Examples & Best Practices." https://www.postman.com/api-platform/api-documentation/
- Swagger/SmartBear. "API Documentation: The Secret to a Great API Developer Experience." https://swagger.io/resources/ebooks/api-documentation-the-secret-to-a-great-api-developer-experience/
- Pronovix. "Developer Experience Best Practices - API The Docs 2023." https://pronovix.com/articles/developer-experience-best-practices-api-docs-2023

Key Statistics

- 64% of developers express frustration when faced with poor API documentation resources
- Documentation with interactive components reduces support queries by up to 30%
- Developers experienced a 40% decrease in onboarding time following documentation improvements
- Support requests fell by an average of 35% with improved documentation
- Satisfaction scores improved from 60% to over 85% after documentation enhancements

By Sreedhar Pamidiparthi
Infrastructure as Code Is Not Enough

When Infrastructure as Code Stops Solving the Problem

Infrastructure as Code changed the industry for the better. For the first time, infrastructure could be reviewed, versioned, and deployed with the same discipline as application code. Teams moved faster, environments became more consistent, and manual mistakes dropped dramatically. But as systems grew larger and more dynamic, many teams started to notice something uncomfortable. Even with well-written Terraform or CloudFormation, production incidents did not disappear. Upgrades were still risky. Latency problems still required late-night intervention. Security drift still showed up months after deployment. The issue was not poor implementation. The issue was that Infrastructure as Code was never designed to operate systems continuously. It was designed to create them.

Why Static Definitions Struggle in Living Systems

Infrastructure as Code was built for systems that behave predictably. You define resources, apply the configuration, and expect the system to remain close to that state. Modern infrastructure does not work that way. Cloud native platforms change constantly. Traffic shifts by the minute. Pods restart. Nodes appear and disappear. Dependencies slow down without ever fully failing. These behaviors are normal, not exceptional. Static definitions describe what infrastructure should look like, but they do not understand what is happening right now. A Terraform file can define capacity, but it cannot tell whether that capacity is sufficient under current conditions. A Kubernetes manifest can set limits, but it cannot detect slow performance that quietly degrades user experience. Many teams try to bridge this gap with simple automation.

```python
if cpu_usage > 80:
    scale_up()
```

This works until it does not. Latency may increase even when CPU is low. Scaling may make things worse if the real issue is a downstream dependency or a bad deployment. Living systems need decisions based on context, not fixed thresholds.
```python
if latency > intent.max_latency:
    if recent_deploy_detected():
        rollback()
    else:
        rebalance_traffic()
```

This approach evaluates the system against what it is meant to deliver, not just raw metrics. It chooses actions that protect user experience instead of blindly reacting to signals. The problem with static definitions is not that they are wrong. It is that they freeze intent at deployment time while the system continues to evolve. Without continuous evaluation and correction, configuration and reality drift apart. As systems become more dynamic, that gap grows faster. Infrastructure teams are not struggling because they lack discipline. They are struggling because static tools are being asked to manage living systems.

The Shift From Configuration to Behavior

For a long time, infrastructure work was about configuration. Teams focused on getting settings right, choosing the correct instance sizes, and defining how many replicas should run. If the configuration matched the template, the system was considered healthy. That mindset breaks down in modern environments. A system can be perfectly configured and still behave poorly. Latency can rise, dependencies can slow down, and user experience can degrade without anything being technically misconfigured. Behavior focuses on what the system actually does in production, not how it was set up. It shifts attention from static settings to real outcomes like availability, response time, and error rates. Instead of asking whether infrastructure matches a definition, teams ask whether the system is behaving the way users expect. This shift changes how automation works. Traditional automation reacts to individual signals.

```python
if cpu_usage > 80:
    scale_up()
```

Behavior-driven systems look at context and outcomes before acting.

```python
if latency > intent.max_latency:
    protect_user_experience()
```

The difference is subtle but important. The first example reacts to a metric. The second reacts to user impact.
The system is no longer following instructions blindly. It is making decisions based on behavior. This approach also reduces unnecessary action. Not every spike requires scaling. Not every anomaly requires rollback. By focusing on behavior, platforms act only when user-facing goals are at risk. The shift from configuration to behavior is what allows infrastructure to move from being automated to being adaptive. It is the foundation for platforms that can respond intelligently to change instead of reacting mechanically to symptoms.

Policy as Code Keeps Systems Safe After Deployment

Infrastructure as Code creates resources, but it does not control how those resources behave over time. Policy as Code fills that gap by enforcing safety rules continuously, not just at deployment. In Kubernetes, policies commonly run at admission time and block unsafe configurations before they reach the cluster. Require resource limits on all pods:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredResources
metadata:
  name: require-resource-limits
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```

This prevents a single workload from exhausting cluster resources. Block privileged containers:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: disallow-privileged
spec:
  parameters:
    privileged: false
```

This stops insecure workloads before they run. Policies can also include operational context. Freeze deployments during peak hours:

```python
if is_peak_hours():
    deny_change("deployment blocked during peak traffic")
```

Policies do not replace engineering judgment. They encode it. Once defined, they run everywhere, all the time, with no manual review required. This is what keeps systems safe after deployment. The platform enforces the rules so teams can move fast without breaking production.

Intent Changes the Conversation Completely

Traditional infrastructure conversations revolve around configuration. How many replicas should we run?
What instance size should we use? Which threshold should trigger scaling? These questions assume that decisions made upfront will remain correct in production. Intent changes that assumption. Instead of prescribing how the system should operate, intent defines what the system must protect. Availability, latency, and error tolerance become the source of truth. The platform is free to change its behavior as long as those goals are preserved. Below is a simplified intent-driven control loop that shows how modern platforms reason about decisions.

```python
intent = {
    "availability": 99.9,
    "max_latency_ms": 200,
    "max_error_rate": 0.5,
    "recovery": {
        "rollback_on_regression": True,
        "allow_autoscale": True
    }
}

def evaluate_system(metrics, context, intent):
    if metrics["latency_ms"] > intent["max_latency_ms"]:
        if context["recent_deploy"] and intent["recovery"]["rollback_on_regression"]:
            return "ROLLBACK"
        if intent["recovery"]["allow_autoscale"]:
            return "SCALE_OR_REBALANCE"
        return "ESCALATE"
    if metrics["error_rate"] > intent["max_error_rate"]:
        return "ROLLBACK"
    if metrics["availability"] < intent["availability"]:
        return "HEAL"
    return "NO_ACTION"

action = evaluate_system(runtime_metrics, runtime_context, intent)
execute(action)
```

This code does not react to a single signal. It evaluates behavior against declared intent and chooses the safest action based on context. Latency does not automatically trigger scaling. A recent deployment may trigger rollback instead. If neither action is safe, the system escalates. Intent also influences decisions before changes reach production. During admission or deployment validation, the platform can predict whether a change threatens declared goals and block it early.

```python
if predicted_latency_after_change > intent["max_latency_ms"]:
    deny_change("deployment violates service intent")
```

This is where the conversation fundamentally changes. Teams stop arguing about which metric matters most in the moment. The system already knows.
Engineers define intent once, and the platform enforces it continuously. Intent does not remove human judgment. It preserves it. The difference is that the judgment is applied consistently, instantly, and at machine speed. When intent drives decisions, infrastructure stops reacting to symptoms and starts protecting outcomes. That is what allows platforms to move from automated to truly adaptive behavior.

Feedback Loops Turn Observability Into Action

Most infrastructure teams already have good observability. They collect metrics, logs, and traces and can see what is happening across their systems. The challenge is that seeing a problem does not automatically resolve it. Feedback loops are what turn observability into action. They continuously compare what the system is doing with what it is supposed to do. When those two drift apart, the system responds on its own instead of waiting for a human to intervene. This changes how reliability works in practice. Instead of reacting to alerts after users are affected, the platform looks for early signs of degradation and corrects them quietly. Small latency increases, rising error rates, or unusual behavior trigger adjustment before they escalate into incidents. Feedback loops also bring consistency. The same decisions are made every time, without fatigue or guesswork. Humans step in only when the system cannot safely correct itself. This reduces noise while improving stability. With feedback loops in place, dashboards stop being passive displays and become part of a living system. Observability is no longer just about knowing what went wrong. It becomes a continuous mechanism for keeping the system healthy. This is how infrastructure moves from alert-driven operations to self-correcting behavior, and why feedback loops are essential for modern platforms.

The Difference Between Automation and Autonomy

Automation and autonomy are often used interchangeably, but they are not the same thing.
The difference becomes obvious the moment systems stop behaving as expected. Automation follows instructions. It reacts to predefined conditions and executes predefined actions. This works well when failures are simple and predictable, but it breaks down when symptoms do not clearly point to a cause.

```python
if cpu_usage > 80:
    scale_up()
```

This kind of automation is fast, but it is blind. It does not know why CPU is high, whether scaling will help, or whether a recent deployment caused the problem. Autonomy introduces judgment. An autonomous system evaluates context, understands intent, and chooses the safest action among several options. Instead of reacting to a single signal, it reasons about system behavior.

```python
def decide_action(metrics, context, intent):
    if metrics["latency"] > intent["max_latency"]:
        if context["recent_deploy"]:
            return "ROLLBACK"
        if metrics["cpu"] > 80:
            return "SCALE"
        return "REBALANCE"
    if metrics["error_rate"] > intent["max_error_rate"]:
        return "ROLLBACK"
    return "NO_ACTION"

action = decide_action(runtime_metrics, runtime_context, service_intent)
execute(action)
```

This system does not assume one correct response. It asks what the system is trying to protect and acts accordingly. The same symptom can lead to different actions depending on context. This distinction matters in real production environments. Automated systems can make problems worse by repeatedly applying the wrong fix. Autonomous systems adapt. They can pause, reverse, or escalate instead of blindly continuing. Autonomy also changes the role of humans. Engineers are no longer responsible for executing recovery steps under pressure. Their role is to define intent, design guardrails, and improve decision logic over time. Automation reduces effort. Autonomy reduces risk. As systems grow more complex, the ability to reason and adapt becomes more valuable than the ability to react quickly. That is why modern platforms are moving beyond automation and toward autonomy.
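Run continuously, a decision function of this shape becomes the feedback loop described earlier: observe, compare against intent, act, and repeat. The sketch below is illustrative, not a real platform API: the observation data is hard-coded, and `apply` is a stub standing in for real scale/rollback/rebalance operations.

```python
def decide_action(metrics, context, intent):
    # Same decision shape as above: evaluate behavior against declared intent.
    if metrics["latency"] > intent["max_latency"]:
        if context["recent_deploy"]:
            return "ROLLBACK"
        if metrics["cpu"] > 80:
            return "SCALE"
        return "REBALANCE"
    if metrics["error_rate"] > intent["max_error_rate"]:
        return "ROLLBACK"
    return "NO_ACTION"

def control_loop(observations, intent, apply):
    """Reconcile each observation: observe -> decide -> act."""
    actions = []
    for metrics, context in observations:
        action = decide_action(metrics, context, intent)
        if action != "NO_ACTION":
            apply(action)  # in a real platform: trigger rollback, autoscaler, etc.
        actions.append(action)
    return actions

# A latency breach right after a deploy triggers rollback; the same breach
# without a recent deploy but with high CPU triggers scaling instead.
intent = {"max_latency": 200, "max_error_rate": 0.5}
observations = [
    ({"latency": 350, "error_rate": 0.1, "cpu": 40}, {"recent_deploy": True}),
    ({"latency": 350, "error_rate": 0.1, "cpu": 95}, {"recent_deploy": False}),
    ({"latency": 120, "error_rate": 0.1, "cpu": 30}, {"recent_deploy": False}),
]
print(control_loop(observations, intent, apply=lambda a: None))
# → ['ROLLBACK', 'SCALE', 'NO_ACTION']
```

The key property is that the same symptom (high latency) produces different actions depending on context, which is exactly what separates autonomy from threshold-based automation.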
A Kubernetes Story Everyone Recognizes

Imagine a production Kubernetes cluster during a traffic surge. Latency slowly increases. Error rates remain low, so nothing crashes. Alerts start firing, but there is no obvious failure. In a traditional setup, an on-call engineer investigates, correlates dashboards, and makes a judgment call under pressure. In a platform built around policy, intent, and feedback, the system notices the latency breach, validates which actions are safe, scales resources, and pauses risky deployments automatically. By the time anyone looks at a dashboard, the system has already corrected itself.

Why This Shift Matters Now

Infrastructure teams are operating in a very different world than they were just a few years ago. Modern systems are highly distributed, constantly changing, and deeply interconnected. A single user request often crosses dozens of services, clusters, and dependencies. When performance degrades, there is rarely one clear failure to fix. Change has also become continuous. Deployments happen daily or even hourly. Infrastructure scales automatically. Dependencies evolve independently. In this environment, relying on humans to manually validate every change or respond to every signal simply does not scale. The risk is not always a hard outage. Small latency increases, quiet configuration drift, and subtle security gaps can damage user experience long before an alert fires. Infrastructure as Code assumes correctness is established at deployment time. Modern systems require correctness to be maintained all the time. Policy, intent, and feedback loops address this gap. They allow infrastructure to adapt to real conditions, enforce safe behavior, and correct itself before issues escalate. This shift matters now because complexity is already outpacing human response, and the teams that succeed are the ones building platforms that can handle change on their own.
The New Question Infrastructure Teams Must Ask

For years, infrastructure teams measured success by one primary question. Did we deploy it correctly? If the configuration matched the template and the pipeline turned green, the job was considered done. That question made sense when systems were smaller and change was rare. Today, it falls apart the moment production traffic behaves differently than expected or a dependency fails in a non-obvious way. In modern environments, correctness is not a moment in time. It is something that must be maintained continuously. The more important question now is much simpler and much harder. Can this system keep itself correct when conditions change? Keeping a system correct means more than staying up. It means continuing to meet user expectations even as load increases, deployments happen, nodes fail, and networks misbehave. It means understanding the difference between a harmless fluctuation and a real threat to reliability, and responding appropriately without waiting for human judgment. This is where intent, policy, and feedback quietly work together. Intent defines what success looks like in human terms. Policy defines the boundaries that should never be crossed. Feedback shows whether reality is drifting away from either. When these three are connected, infrastructure stops reacting blindly and starts making informed decisions. A simple example makes this clear. Instead of hard-coding how infrastructure should scale, teams can express what they actually care about. Availability, latency, and error rates become first-class inputs to the system.

```yaml
serviceIntent:
  availabilityTarget: 99.9
  maxLatencyMs: 250
  autoRecovery:
    rollbackOnDegradation: true
```

This intent does not say how many replicas to run or which nodes to use. It describes the outcome the system must protect. The platform continuously compares real metrics against this intent and decides how to respond.
Policies then ensure that whatever action the system takes is safe.

```yaml
policy:
  allowAutoScale: true
  blockChangesDuringPeakHours: true
  requireEncryptedTraffic: true
```

Now the system has both a goal and a set of guardrails. When latency increases, it can scale or rebalance workloads. When a deployment pushes latency beyond acceptable limits, it can automatically roll back. When conditions are risky, it can pause further changes entirely. Feedback loops close the loop by validating that actions actually worked.

```python
if latency > intent.maxLatencyMs:
    if policy.allowAutoScale:
        scale_cluster()
    elif intent.autoRecovery.rollbackOnDegradation:
        rollback_last_deploy()
```

This logic is simple on purpose. The power does not come from complex algorithms. It comes from continuously asking whether the system is still meeting its intent and acting when it is not. In this model, humans are no longer the primary decision makers during incidents. They become designers of intent and policy. Instead of writing runbooks for every possible failure, teams teach the system how to judge situations on its own. This shift also changes how success is measured. Fewer alerts are not the goal. Faster deployments are not the goal. The real measure of success is how often the system corrects itself before anyone notices there was a problem at all. Infrastructure teams that adopt this mindset stop asking whether something was deployed correctly and start asking whether the platform can protect users under pressure. That question leads to different architectures, different tooling choices, and ultimately more resilient systems. In modern environments, failure is inevitable. Human intervention does not have to be. The teams that thrive are the ones building infrastructure that can observe itself, reason about its own health, and take action long before a pager ever goes off. That is the new question infrastructure teams must ask, and it is quietly redefining what reliability really means.
Conclusion: Infrastructure That Can Take Care of Itself

Infrastructure has changed, even if many of the tools have not. Systems are larger, more dynamic, and more interconnected than they were when Infrastructure as Code first became popular. In that world, simply defining resources correctly is no longer enough to keep systems reliable. What separates resilient platforms from fragile ones is not how fast they can deploy, but how well they can protect themselves once deployed. Policy gives infrastructure boundaries. Intent gives it purpose. Feedback loops give it awareness. When these elements work together, infrastructure stops being something that engineers constantly chase and starts becoming something that quietly takes care of itself. This shift does not remove humans from the loop. It puts them where they add the most value. Instead of reacting to alerts and manually stitching together recovery steps, teams focus on designing better systems, clearer intent, and stronger guardrails. Reliability becomes a property of the platform, not a burden on the people running it. Infrastructure as Code was an essential milestone. But it was never the destination. The next generation of infrastructure is defined by systems that can observe, decide, and act on their own. The real question for modern teams is no longer whether infrastructure can be deployed correctly, but whether it can remain correct when the unexpected inevitably happens. That is where modern infrastructure is headed, and it is where the most resilient platforms are already operating today.

By Venkatesan Thirumalai
A Guide to Parallax and Scroll-Based Animations

Parallax animation can transform static web pages into immersive, interactive experiences. While traditional parallax relies on simple background image movement and tons of JavaScript code, scroll-based CSS animation opens up a world of creative possibilities with no JavaScript at all. In this guide, we’ll explore two distinct approaches:

- SVG block animation: Creating movement using SVG graphics for unique, customizable effects.
- Multi-image parallax background: Stacking and animating multiple image layers for a classic parallax illusion.

We'll walk through each technique step by step, compare their strengths and limitations, and offer practical tips for responsive design.

Part 1: SVG Block Animation

Scalable vector graphics (SVGs) are perfect for sharp, resolution-independent visuals. With CSS and JavaScript, you can animate SVG elements in response to user scrolling, creating effects like walking figures, moving clouds, or shifting landscapes.

Step 1: Create Your Scene

Start by designing your SVG. For our example, I picked a stock layered SVG and cut it into layers, creating single-layer SVGs. Let's set up the scene first: place the layers and create the static CSS (Codepen: static content setup).

```html
<section class="content">
  <div class="layout-container layout-block">
    <p> some content there </p>
  </div>
</section>
<section class="hero">
  <div class="hero-block">
    <div class="hero-background">
      <div class="layer layer1"> svg1 there </div>
      <div class="layer layer2"> svg2 there </div>
      <div class="layer layer3"> svg3 there </div>
      <div class="layer layer4"> svg4 there </div>
      <div class="layer layer5"> svg5 there </div>
    </div>
    <div class="hero-content">
      <span class="emoji">✅</span>
      <h1>markup SVG</h1>
      <h2>images animation</h2>
    </div>
  </div>
</section>
<section>
  <div class="layout-container layout-block">
    <p>some more content</p>
  </div>
</section>
```

Tip: Use SVG editors or design tools like Figma to create more complex shapes.
As I had quite big SVGs, I moved their declarations to a separate file. In real life, you could use the assets folder instead.

Step 2: Style the Container and SVG

```css
.layout-block {
  position: relative;
  height: 50vh;
  min-height: 300px;
}

.hero-block {
  position: relative;
  height: 100vh;
  min-height: 400px;
  max-height: 70vw;
}

.hero {
  position: relative;
  background-color: white;
  animation: parallax linear;
}

.hero-background {
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}

.hero-content {
  text-align: center;
  position: absolute;
  width: 100%;
  top: 25%;
}

.layer {
  display: flex;
  z-index: 0;
  position: absolute;
  height: 100%;
  width: 100%;
  bottom: 0;
  left: 0;
}
```

Important: Avoid overflow: hidden on the container, as it can cut off animated SVG elements when they move outside the bounds.

Step 3: Add Scroll-Based Animation

Now, let’s move the SVG blocks horizontally as the user scrolls. Our goal looks approximately like this. The old-fashioned way would look like the code below:

```javascript
// DON'T DO THIS ❌
window.addEventListener('scroll', () => {
  const scrollY = window.scrollY;
  document.querySelector('.block').style.transform =
    `translateX(${scrollY * 0.5}px)`;
});
```

Now we're going to do the magic using CSS scroll-driven animations, which provide two new functions: view() and scroll(). Let’s add an animation to the background pattern within each hero section to modify the background position using scroll() (see documentation).

```css
.hero {
  animation: parallax linear;
  animation-timeline: scroll();
}

@keyframes parallax {
  from { top: 0; }
  to { top: -40%; }
}
```

Add an animation layer by moving the title down with view() (documentation).

```css
.hero-content {
  top: 25%;
  animation: float-0-25 linear;
  animation-timeline: view(-100px);
}

@keyframes float-0-25 {
  from { top: 0; }
  to { top: 25%; }
}
```

We could actually stop here, and it would cover most parallax background cases. But let's go a step further and play with animating the layers.
```css
.layer1 {
  opacity: 0.6;
  animation: parallax linear;
  animation-timeline: view();
  animation-range: 40vh 120%;
}

.layer2 {
  animation: parallax2 linear;
  animation-timeline: view();
  animation-range: 60vh 100%;
}

.layer3 {
  animation: parallax-bottom linear;
  animation-timeline: view();
  animation-range: 60vh 100%;
}

.layer4 {
  animation: float-right;
  animation-timeline: view();
  animation-range: 50vh 100%; /* will not work as max-width is set for parent */
}

.layer5 {
  animation: float-left;
  animation-timeline: view();
  animation-range: 50vh 100%;
}

@keyframes parallax {
  from { top: 0; }
  to { top: -40%; }
}

@keyframes parallax2 {
  from { top: 0; }
  to { top: -20%; }
}

@keyframes parallax-bottom {
  from { top: 0; }
  to { top: 30%; }
}

@keyframes float-left {
  0% { left: 0; }
  100% { left: -40%; }
}

@keyframes float-right {
  0% { left: 0; }
  100% { left: 40%; }
}
```

Add a rotate animation for the icon.

```css
.emoji {
  z-index: 0;
  display: inline-block;
  font-size: 50px;
  animation: rotate linear, orbit-out ease;
  animation-timeline: view();
  animation-range: 0% 80%, 80% 110%;
}

@keyframes rotate {
  0% { transform: rotate(200deg); }
  100% { transform: rotate(0deg); }
}
```

Add some extra styles. Play a bit. Here we are! Check out the live demo on CodePen.

Step 4: Make It Responsive

SVGs can scale, but their container needs responsive handling:

- Use width: 100vw; or max-width: 100%; for the container.
- Adjust viewBox and SVG dimensions.
- Use @media queries to tweak height or layout on different devices.

Step 5: Limitations

- Overflow: If you need content clipped, consider a different animation method, as SVG transform animations are restricted by overflow: hidden.
- Repeatability: SVGs can’t be easily repeated as backgrounds like raster images.
- Performance: SVGs are efficient for simple shapes, but complex scenes may slow down rendering, especially on mobile.

Part 2: Multi-Image Parallax Background

Here I decided to experiment with background images. The idea is the same.
Create a multilayer structure, and move the background in different directions to create a "walk" effect.

Step 1: Prepare Your Layers

Create separate images for each depth layer (background, midground, foreground). PNGs with transparency work well.

```html
<section class="hero bg-night">
  <div class="hero-block">
    <div class="hero-background">
      <div class="bg bg1"></div>
      <div class="bg bg2"></div>
      <div class="bg bg3"></div>
      <div class="bg bg4"></div>
    </div>
    <div class="hero-content">
      <span class="emoji">✅</span>
      <h1>Background</h1>
      <h2>images animation</h2>
    </div>
  </div>
</section>
```

Step 2: Style the Layers

```css
.bg {
  display: flex;
  z-index: 0;
  position: absolute;
  height: 100%;
  width: 100%;
  bottom: 0;
  left: 0;
  background-position: 0;
}

.bg1 {
  opacity: 0.6;
  background-image: url("https://raw.githubusercontent.com/h-labushkina/ccs-parallax-animation/609b4b41529f3d2aaf3d7e8be223e7376f793b23/svg/1.svg");
}

.bg2 {
  background-image: url("https://raw.githubusercontent.com/h-labushkina/ccs-parallax-animation/609b4b41529f3d2aaf3d7e8be223e7376f793b23/svg/2.svg");
}

.bg3 {
  background-image: url("https://raw.githubusercontent.com/h-labushkina/ccs-parallax-animation/609b4b41529f3d2aaf3d7e8be223e7376f793b23/svg/3.svg");
}

.bg4 {
  background-image: url("https://raw.githubusercontent.com/h-labushkina/ccs-parallax-animation/609b4b41529f3d2aaf3d7e8be223e7376f793b23/svg/5.svg");
}
```

Step 3: Add Animation

Now let's make it come alive! Here we use animation-range to control both the speed and the choreography of the animation: the mountains at the back start moving first, and 20vh later the next layer joins in. Play with this range to make your animation perfect.
CSS

.bg1 { animation: parallax-bg linear; /* moves top */ animation-timeline: view(); animation-range: 40vh 120%; }
.bg2 { animation: parallax2-bg linear; /* moves top */ animation-timeline: view(); animation-range: 60vh 100%; }
.bg3 { animation: parallax-bottom-bg linear; animation-timeline: view(); animation-range: 60vh 100%; }
.bg4 { animation: float-left-bg; /* moves left */ animation-timeline: view(); animation-range: 20vh 120%; }
@keyframes parallax-bg { from { background-position: 0; } to { background-position: 0 100%; } }
@keyframes parallax2-bg { from { background-position: 0; } to { background-position: 0 70%; } }
@keyframes parallax-bottom-bg { from { background-position: 0; } to { background-position: 0 -10%; } }
@keyframes float-left-bg { 0% { background-position: 0; } 100% { background-position: 60% 0; } }

Each layer moves at a different rate, creating depth.

Step 4: Responsive Design

Use @media queries to adjust the background size for smaller screens. Don't use background-size: cover; as it will prevent vertical animation.

CSS

@media screen and (min-width: 1024px) and (max-width: 2024px) { .bg { background-size: 2024px; } }
@media screen and (min-width: 768px) and (max-width: 1024px) { .bg { background-size: 1024px; } }
@media screen and (max-width: 768px) { .bg { background-size: 70vh 70vh; } }

Step 5: Troubleshooting and Limitations

Image sizing: Large images can slow down loading. Compress and optimize all assets.
Repeating: Image backgrounds can be repeated if desired, with background-repeat: repeat;.
Responsiveness: Different screen ratios might crop or stretch images; test thoroughly.

Comparison: SVG vs. 
Multi-Image Parallax

Feature | SVG Block Animation | Multi-Image Background
Custom Visuals | High (any shape, path, or style) | Limited to static images
Repeatability | Not supported | Supported with background-repeat
Overflow Handling | Limited (no overflow: hidden) | Not an issue
Responsiveness | Requires careful container scaling | Needs media queries
Performance | Great for simple SVGs | Depends on image size/count
Animation Control | Fine-grained (per element) | Layer-based
Browser Support | Excellent | Excellent

Limited availability.

Browser | animation-timeline / Scroll Animations
Chrome | ✅ (from 115)
Edge | ✅ (from 115)
Safari | ✅ (from 17.4)
Firefox | ❌

Final Thoughts

We’ve just taken a look at how the new CSS view() and scroll() functions work with animation-timeline to bring scroll-based animations to life — all without needing JavaScript. Instead of writing event listeners and math to track scroll position, you can now describe these effects right in your CSS, making things simpler and cleaner. The example we built shows how easy it is to get smooth, responsive animations that react as you scroll down the page. It’s a great way to keep your code tidy and take advantage of what modern browsers can do. If you’re ready to try out some scroll magic in your projects, definitely give these new CSS features a shot.

Explore further: GitHub Source Code · Live Demo on CodePen
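One last practical note: for browsers without animation-timeline support (Firefox, at the time of writing), the effect can still be approximated in JavaScript. Under the hood, a view() timeline maps an element's journey through the viewport to a 0–1 progress value; the sketch below reproduces that mapping as a pure function. This is a rough illustration for fallback purposes, not a spec-exact polyfill.

```javascript
// Approximates what animation-timeline: view() computes natively:
// how far an element has travelled through the viewport, normalized
// against an animation-range and clamped to [0, 1].
function viewProgress(elementTop, viewportHeight, scrollY, rangeStartPx, rangeEndPx) {
  // Pixels the element has travelled past the bottom edge of the viewport
  const travelled = scrollY + viewportHeight - elementTop;
  // Normalize against the configured range, then clamp
  const progress = (travelled - rangeStartPx) / (rangeEndPx - rangeStartPx);
  return Math.min(1, Math.max(0, progress));
}

// Element sits 800px down the page, viewport is 600px tall,
// animation-range equivalent of 100px..500px of travel:
viewProgress(800, 600, 0, 100, 500);    // → 0 (not yet in range)
viewProgress(800, 600, 400, 100, 500);  // → 0.25 (partway through)
viewProgress(800, 600, 2000, 100, 500); // → 1 (range completed)
```

Wire this to a scroll listener with requestAnimationFrame and use the returned progress to drive transform or background-position values directly.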

By Hanna Labushkina
Playwright Fixtures vs. Lazy Approach
Playwright Fixtures vs. Lazy Approach

When building scalable test automation frameworks, how you create and manage objects (pages, services, helpers) matters as much as the tests themselves. Two commonly used patterns are the Fixture Approach and the Lazy Approach. Each has its own strengths — and choosing the right one can significantly impact performance, readability, and maintainability. In this blog, we take a deep dive into the Fixture Approach and the Lazy Approach, helping you understand when and why to use each one.

Fixture vs. Lazy Approach in Test Automation

The Fixture Approach and the Lazy Approach represent two different ways of managing object creation in test automation. In the Fixture Approach, all required objects are created upfront before the test starts, even if some of them are never used. This can simplify setup but often leads to unnecessary resource usage and slower execution. The Lazy Approach, on the other hand, creates objects only when they are actually needed during test execution. This results in better performance, reduced memory usage, and a cleaner, more scalable test design — making it the preferred approach for large automation suites.

Key Concept

Eager (Fixture) loading: Create all objects immediately
Lazy loading: Create objects on demand (recommended)

Simple Analogy

Think of it like a restaurant:

Eager (Fixture): The chef prepares all menu items when the restaurant opens
Lazy: The chef prepares dishes only when customers order them

Fixture Implementation

In the Fixture Approach, all required objects are created upfront before the test starts. These objects are injected into the test via fixtures and are available throughout the test lifecycle. 
JavaScript

const { test: base } = require("@playwright/test");

const test = base.extend({
  sitePage: async ({ page }, use) => { await use(new SitePage(page)); },
  commonPage: async ({ page }, use) => { await use(new CommonPage(page)); },
  adminEnvVars: async ({ page }, use) => { await use(new AdminEnvVarsPage(page)); }
});

Problems in the Fixture Approach

All page objects are created upfront
Higher memory usage

In your test, you will see that all objects are created upfront. In the code snippet below, commonPage, adminEnvVars, and sitePage are page objects. Even if you use only commonPage, the others are still created.

JavaScript

test("TC0023", async ({ commonPage, adminEnvVars, sitePage }) => {
  // ALL 3 objects already created before test runs
  // Even if you only use commonPage, others still created!
});

Fixture Approach Flow

All objects are created upfront, and memory is wasted on unused objects.

Plain Text

Test starts
↓ commonPage created ✓
↓ adminEnvVars created ✓
↓ sitePage created ✓
↓ Test code runs (uses only commonPage)
↓ Test ends
↓ Memory wasted on unused objects

Note: Here, adminEnvVars and sitePage are the unused objects.

Lazy Implementation (On-Demand Object Creation)

Objects are created only when they are used. Below is the folder structure for the Lazy Implementation.

Creating the Pages

Let's create the Login and Home Pages. 
LoginPage.ts

TypeScript

import { Page } from '@playwright/test';

export class LoginPage {
  constructor(private page: Page) {}

  async goToLoginPage(baseURL: string) {
    await this.page.goto(`${baseURL}/login`);
  }

  async doLogin(username: string, password: string) {
    await this.page.fill('#username', username);
    await this.page.fill('#password', password);
    await this.page.click('#loginBtn');
  }
}

HomePage.ts

TypeScript

import { Page, expect } from '@playwright/test';

export class HomePage {
  constructor(private page: Page) {}

  async isUserLoggedIn() {
    await expect(this.page.locator('#logoutBtn')).toBeVisible();
  }
}

Lazy Implementation: The PageObjects Factory

The PageObjects class is the key to the Lazy Approach. It creates page objects only when requested.

PageObjects.ts

TypeScript

import { Page } from '@playwright/test';
import { LoginPage } from '../pages/LoginPage';
import { HomePage } from '../pages/HomePage';

export class PageObjects {
  constructor(private page: Page) {}

  createLoginPage(): LoginPage {
    return new LoginPage(this.page);
  }

  createHomePage(): HomePage {
    return new HomePage(this.page);
  }
}

Test Using the Lazy Approach

This test demonstrates how the Lazy Approach works in a Playwright automation framework — page objects are created only when needed, not upfront.

login.spec.ts

TypeScript

import { test } from '@playwright/test';
import { PageObjects } from '../pageObjects/PageObjects';

test('login test with Lazy Pattern', async ({ page, baseURL }) => {
  const pageObjects = new PageObjects(page);
  const loginPage = pageObjects.createLoginPage();
  const homePage = pageObjects.createHomePage();

  await loginPage.goToLoginPage(baseURL);
  await loginPage.doLogin('abc', 'kailash@23');
  await homePage.isUserLoggedIn();
});

Flow of Lazy Implementation

Plain Text

Test starts
↓ Test code runs
↓ Access loginPage → Created on-demand ✓
↓ Access homePage → Created on-demand ✓
↓ Test ends
↓ Pages that were never used → Never created ✓ (Memory saved!)

Benefits: Fixture vs. 
Lazy

Fixture Approach

Creates all page objects upfront
Even unused ones
Slower startup
Harder to scale

Lazy Approach

Creates only required pages
Faster execution
Cleaner tests
Scales easily to 30–40+ pages

Conclusion

When your test setup involves only a small number of objects, using fixtures is a straightforward and effective approach. Fixtures provide a structured way to initialize and manage these objects, making the test setup predictable, readable, and easy to maintain. However, as the application grows and the number of objects increases — typically 30–40 or more — eagerly creating all objects through fixtures can lead to unnecessary initialization, increased memory usage, and slower test execution. In such cases, the Lazy Approach is more suitable.
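As a closing sketch: one refinement teams often layer on top of the Lazy Approach is memoizing page objects, so repeated access within a test reuses the same instance instead of constructing a new one each time. The LazyPageObjects class below is hypothetical, not part of the example above.

```javascript
// Lazily creates page objects on first access and caches them,
// so each page class is instantiated at most once per test.
class LazyPageObjects {
  constructor(page) {
    this.page = page;
    this.cache = new Map();
  }

  // The factory runs only on the first request for this key;
  // later calls return the cached instance.
  get(key, factory) {
    if (!this.cache.has(key)) {
      this.cache.set(key, factory(this.page));
    }
    return this.cache.get(key);
  }
}

// Stand-in page class for illustration
class LoginPage {
  constructor(page) { this.page = page; }
}

const pages = new LazyPageObjects({ /* Playwright page would go here */ });
const first = pages.get('login', (p) => new LoginPage(p));
const second = pages.get('login', (p) => new LoginPage(p));
// first === second → true: built once, reused afterwards
```

A Playwright fixture can wrap such a factory so tests still receive it via dependency injection, while the actual page construction stays on demand.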

By Kailash Pathak DZone Core CORE
String to Unicode Converter Utility
String to Unicode Converter Utility

This is a technical article for Java developers. It describes a Java utility that can convert strings to Unicode sequences and back. There are many websites and other services that allow various text conversions; this utility allows you to do the conversion in Java code. It converts any string into a String containing a Unicode sequence that represents the characters of the original string. The utility can do the backwards conversion as well — convert a Unicode sequence String into a textual String. Just to show an example, the String "Hello World" can be converted into "\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064".

What Is This Utility Needed For?

There are several use cases where such a utility would be needed:

I used it multiple times to diagnose and debug very tough encoding problems when text appears garbled and you need to understand the root cause or what the text is supposed to be.
Another case is when your application needs to send info to systems that do not support certain encodings, or it is not known which encodings they support. If the info is sent as a Unicode sequence, there are no encoding issues, and the codes are universal. As such, they are good for interoperability between different systems.
Property files: Sometimes, when your application reads properties from files and some text values are in languages that use non-Latin scripts (such as Hebrew, Arabic, Chinese, Japanese, Russian, and many others) and those files need to be distributed to different systems, you might want to convert your non-Latin text values into Unicode sequences to avoid the danger of a file being saved in an incorrect encoding. But then the application that reads the properties needs to convert them back into the original text.

How to Use the Utility

The utility is provided as part of an open-source Java library called MgntUtils. It is available on Maven Central and GitHub (including source code and Javadoc). Here is a direct link to Javadoc. 
The solution implementation is provided in the class StringUnicodeEncoderDecoder. So, the MgntUtils library needs to be included in your project, and then the usage example may look like this:

Java

String testStr = "Hello World";
String encodedStr = StringUnicodeEncoderDecoder.encodeStringToUnicodeSequence(testStr);
System.out.println(encodedStr);
String restoredStr = StringUnicodeEncoderDecoder.decodeUnicodeSequenceToString(encodedStr);
System.out.println(restoredStr);

The output of this code snippet would be:

Plain Text

\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064
Hello World

A Little More About Encoding Issues

Most encoding issues arise when non-Latin scripts are used, or when text contains portions in different languages with a mix of Latin and non-Latin scripts. So, sometimes when a text (or parts of it) is displayed as question marks or garbled, the main question is whether this is a display error related to encoding, or whether the data itself is corrupted or lost. In this case, it makes sense to convert the problematic String to a Unicode sequence (and sometimes backwards, to see if it fixes the problem). With Unicode sequences, it is easy to determine whether the data is actually there behind the garbled display. If the issue is just an incorrect display and the actual data is not lost or corrupted, you can also determine in what script the original data was written (by the first two hex digits of the \uXXXX character code) and what actual character it was supposed to be (by the last two digits).

Limitations and Trade-Offs

Long Strings Problem

Note that when a String is converted to Unicode sequences, each character is replaced with 6 characters. For example, converting the String "H" will result in the String "\u0048". So, if a long String is converted, the result will be 6 times longer. In addition, the primary value of this conversion is that it allows you to map each character to its Unicode. 
So, when you work with Strings up to 10 to 20 characters or so, everything is great. But think about converting a String containing 500 characters of text, where you need to find the Unicode mapping for the 300th character! This is just not practical, except when you need the text converted to be passed to another system, not for human analysis. But even in this case, be aware of the significant increase in String size.

Converting Strings That Already Contain Unicode Sequences

This utility does not recognize Unicode sequences when converting a String to them. Here is a short example: Converting the String "H" will result in the String "\u0048". However, if you now convert the String "\u0048" again into Unicode sequences, the result will be "\u005c\u0075\u0030\u0030\u0034\u0038". This is because the utility will take each of the 6 characters of the String "\u0048" one by one and convert each one into its Unicode sequence. (Don't confuse this with the method for decoding Unicode sequences — StringUnicodeEncoderDecoder.decodeUnicodeSequenceToString(), which converts a Unicode sequence String back to the original String. That method, of course, will convert the String "\u0048" back into the String "H".)

Conclusion

I have used this utility very effectively for testing thorny encoding issues, and others have used it for this purpose, notably in projects that work with non-Latin languages. So, it is battle-tested. The MgntUtils library is lightweight and very easy to integrate. Give it a try. I hope it will save you some headaches.
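As a footnote for polyglot teams: the same round-trip idea is easy to express in other languages for interoperability. Here is a quick JavaScript sketch of the concept (an illustration only, not the MgntUtils implementation, and limited to BMP characters):

```javascript
// Encode each UTF-16 code unit of a string as a \uXXXX escape
function encodeToUnicodeSequence(str) {
  return [...str]
    .map((ch) => '\\u' + ch.charCodeAt(0).toString(16).padStart(4, '0'))
    .join('');
}

// Decode \uXXXX escapes back into the original string
function decodeUnicodeSequence(seq) {
  return seq.replace(/\\u([0-9a-fA-F]{4})/g, (_, hex) =>
    String.fromCharCode(parseInt(hex, 16))
  );
}

encodeToUnicodeSequence('Hello World');
// → "\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064"
decodeUnicodeSequence('\\u0048'); // → "H"
```

Just like the Java utility, this sketch happily re-encodes a string that already contains escapes, so the same "double conversion" caveat applies.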

By Michael Gantman
Building a 300 Channel Video Encoding Server
Building a 300 Channel Video Encoding Server

Snapshot Organization: NETINT, Supermicro, and Ampere® Computing Problem: The demand for high-quality live video streaming has surged, putting pressure on operational costs and user expectations. Legacy x86 processors struggle to handle the intensive video processing tasks required for modern streaming. Solution: NETINT reimagined the video transcoding server by combining their Quadra VPUs with Ampere® Altra® Processor, creating a smaller, faster, and more cost-effective server. This new server architecture allows for advanced video processing capabilities, including AI inference tasks and automated subtitling using OpenAI’s Whisper. Key features: High performance: Capable of simultaneously transcoding multiple video streams (e.g., 95x 1080i30, 195x 720i30).Cost-effective: Reduces operational costs by 80% compared to traditional x86-based solutions.Advanced processing: Supports deinterlacing, software decoding, and AI inference tasks.Flexible control: Managed via FFmpeg, GStreamer, SDK, or NETINT’s Bitstreams Edge application interface. Technical innovations: Custom ASICs: NETINT’s proprietary ASICs for high-quality, low-cost video processing.Ampere® Altra® processor: Provides unprecedented efficiency and performance, optimized for dense computing environments.Optimized software: Utilizes the latest FFmpeg releases and Arm64 NEON SIMD instructions for significant performance improvements. Impact: The collaboration between NETINT, Supermicro, and Ampere has resulted in a groundbreaking live video server that: Increases throughput by 20x compared to software on x86. Operates at a fraction of the cost. Expands system functionality to support video formats not natively supported by NETINT’s VPU. Enables accurate, real-time transcription of live broadcasts through automated subtitling. Introduction The demand for high-quality live video streaming has grown exponentially in recent years. 
In both developed and emerging markets, operational costs are under pressure while user expectations are expanding. This led NETINT to reimagine the video transcoding server, resulting in a live video server that opens new video processing capabilities, created in collaboration with Supermicro and Ampere Computing. A unique aspect of this architecture is that while NETINT VPUs handle intensive video encoding and transcoding, a powerful host CPU can perform additional functions, such as deinterlacing and software decoding, that the VPU doesn’t support in hardware. Additionally, a powerful host CPU can perform AI inference tasks. NETINT recently announced the industry-first automated subtitling using OpenAI’s Whisper, optimized for the Ampere® Altra® processor, which enables accurate, real-time transcription of live broadcasts. Powered by Ampere CPUs, this server performs video deinterlacing, processing, and transcoding in a dense, high-performance, and cost-effective manner that is not possible with legacy x86 processors. Video engineers control the server via FFmpeg, GStreamer, SDK, or NETINT’s Bitstreams Edge application interface, making it accessible for deploying and replacing existing transcoding resources or in greenfield installations. This case study discusses how NETINT, Supermicro, and Ampere engineers optimized the system to deliver a reimagined video server that simultaneously transcodes 95x 1080i30 streams, 195x 720i30 streams, 365x 576i30 streams, or a combined 100x 576i, 100x 720i, 10x 1080i, 40x 1080p30, 40x 720p30, and 10x 576p streams in a single Supermicro MegaDC SuperServer ARS-110M-NR 1U server. 
This server expands the system functionality by enabling video formats not natively supported by NETINT’s VPU, such as decoding 96 incoming 1080i30 H.264 or H.265 streams via the Ampere® Altra® processor and 320 incoming 1080i MPEG-2 streams. “The punchline is that with an Ampere® Altra® Processor and NETINT VPU, a Supermicro 1U server unlocks a whole new world of value,” Alex Liu, Co-founder, NETINT.

NETINT’s Vision

Responding to customers’ concerns about limited CPU processing and skyrocketing power costs, NETINT built a custom ASIC for one purpose: highest-quality, lowest-cost video processing and encoding. NETINT reinvented the live video transcoding server by combining NETINT Quadra VPUs with the Ampere® Altra® processor to create a smaller and faster server that costs 80% less to operate and increases throughput by 20x compared to software on x86.

Requirements to Reinvent the Video Server

Engineer it smaller and faster.
Make it cost 80% less to operate.
Increase throughput by 20x.

Why NETINT Chose Ampere Processors

NETINT was already familiar with Ampere Computing’s high-performance and low-power processors, which perfectly complement NETINT’s Quadra VPUs. The Ampere® Altra® Cloud Native Processor is designed for a new era of computing and an energy-constrained world — delivering unprecedented efficiency and performance. From web and video service infrastructure to CDNs to demanding AI inference, Ampere products are the most efficient dense computing platforms on the market. The benefits of using a Cloud Native Processor like Ampere® Altra® include improved efficiency and scalability, which have great synergy with NETINT’s high-performance and energy-efficient VPUs.

Problem

Could the Ampere® Altra® simultaneously deinterlace 100 576i, 100 720i, and 10 1080i video streams, in a cost-effective 1RU form factor, where legacy x86 processors couldn’t? 
How Ampere Responded Engineers from NETINT, Supermicro, and Ampere unlocked the high performance of NETINT’s Quadra VPU and Ampere® Altra® 96-core processor to redefine the live-stream video server. Initial results with Ampere® Altra® using FFmpeg 5.0 were encouraging compared to legacy x86 processors, but didn’t meet NETINT’s goal to increase throughput by 20x while reducing costs by 80%. Ampere engineers studied different deinterlacing filters available in FFmpeg and investigated recent Arm64 optimizations available in recent FFmpeg releases. An FFmpeg avfilter patch that provides optimized assembly implementation using Arm64 NEON SIMD instructions showed a significant performance increase in video deinterlacing with up to 2.9x speedup using FFmpeg 6.0 compared to FFmpeg 5.0. With all architectures, and especially true for the Arm64 architecture, using the “latest and greatest” versions of software is recommended to take advantage of performance improvements. Performance Challenges NETINT, Supermicro, and Ampere engineers went to work running the full video workload, combining CPU-based video deinterlacing and transcoding using NETINT’s Quadra VPUs. With outstanding results just running the deinterlacing jobs, initial results running the full video workload didn’t meet the performance target. Combining their broad expertise in hardware and software optimization, the team analyzed, root-caused, and were able to meet the aggressive requirements and, in the end, used just 50-60% of the Ampere® Altra® Processor’s CPU utilization, allowing headroom for future features. The initial results didn’t meet the target of simultaneously transcoding 100x 576i, 100x 720i, 10x 1080i, 40x 1080p30, 40x 720p30, and 10x 576p input videos. Investigating the performance showed that initially, the performance was close to the goal, yet unexpectedly slowed down over time. 
The team followed the performance methodology outlined in Ampere’s tutorial, “Performance Analysis Methodology for Optimizing Altra Family CPUs,” by first characterizing platform-level performance metrics. Figure 2 shows the mpstat utility data: initially, the system was running within ~4% of the performance target yet was only running at ~71% overall CPU utilization, with ~36% in user space (mpstat %usr), and ~35% in system-related tasks — kernel time (mpstat %sys), waiting for IO (mpstat’s %iowait), and soft interrupts (mpstat %soft). The fact that the system was idle ~29% of the time indicated that something was blocking performance. With the large percentage in software interrupts and IO wait time, we initially investigated interrupts using the softirq tool in BCC, which provides BPF-based Linux IO analysis, networking, monitoring, and more. The softirq tool traces the Linux kernel calls to measure the latency of all the different software interrupts on the system, outputting a histogram showing the latency distribution. The BCC tools are very powerful and easy to run. The tool showed ~20 microseconds average latency in the driver used by NETINT’s VPU while handling ~40K interrupts/s. As our performance problem was on the order of milliseconds, the BCC softirq tool showed that software interrupts weren’t limiting performance, so we continued to investigate what was limiting it. Next, we used the perf record/perf report utilities to measure various Performance Measurement Unit (PMU) counters to characterize the low-level details of how the application was running on the CPU, looking to pinpoint performance bottleneck(s). As we initially didn’t know what was limiting performance, we collected PMU counter data to measure CPU utilization (CPU cycles, CPU instructions, Instructions per Clock, frontend and backend stalls), cache and memory access, memory bandwidth, and TLB access. 
As the system after reboot reached ~96% of the performance target and degraded to ~60% after running many jobs, we collected perf data after reboot and when the performance was poor. Analyzing PMU data to identify the largest differences between good- and poor-performance cases, the kernel function alloc_and_insert_iova_range stood out, consuming 40x more CPU cycles in the poor-performance case. Searching the Linux kernel source code via the very powerful live grep website showed that this function is related to IOMMU. Rebooting the kernel with the iommu.passthrough=1 option resolved the performance degradation over time issue by reducing TLB miss rate. We were at ~96% of the performance target, so we were close but needed extra performance to meet our goals! NETINT engineers made the final performance speedup. They saw additional Arm64 deinterlacing optimizations available in FFmpeg mainline, which met our performance goals while reducing the overall CPU utilization to 50-60%, down from 70%. The Results The result is the NETINT 300 Channel Live Stream Video Server Ampere Edition based on a collaboration of NETINT, Supermicro, and Ampere, which can simultaneously transcode 95x 1080i30 streams, 195x 720i30 streams, 365x 576i30 streams, or a combined 100x 576i, 100x 720i, 10x 1080i, 40x 1080p30, 40x 720p30, and 10x 576p streams in a Supermicro MegaDC SuperServer ARS-110M-NR 1U server. This server expands the system functionality to enable running video workloads that require a high-performance CPU in a dense, power-efficient, and cost-effective 1U server. Call to Action NETINT’s vision to reimagine the live video server based on customer demands resulted in the NETINT Quadra Video Server Ampere Edition in a Supermicro 1U server chassis, unlocking a whole new world of value for customers who need to run video workloads that require high-performance CPU processing in addition to video transcoding with NETINT’s VPUs. 
Alex Liu and Mark Donningan from NETINT, Sean Varley from Ampere Computing, and Ben Lee from Supermicro have a webinar available to watch on NETINT’s YouTube channel, “How to Build a Live Streaming Server that delivers 300 HD interlaced channels,” which provides additional information. Other video workloads that are excellent to run on this server include AI inference processing, which NETINT recently announced and demonstrated at NAB 2024 - NETINT unveiled the Industry-First Automated Subtitling Feature with OpenAI Whisper running on Ampere. About the Companies NETINT Founded in 2015, NETINT’s big dream of combining the benefits of silicon with the quality and flexibility of software for video encoding using proprietary ASICs is now a reality. As the first commercial vendor of video-processing-specific silicon, NETINT pioneered the video processing unit (VPU). Nearly 100,000 NETINT VPUs are deployed globally, processing over 300 billion minutes of video. Supermicro Supermicro is a global technology leader committed to delivering first-to-market innovation for Enterprise, Cloud, AI, Metaverse, and 5G Telco/Edge IT Infrastructure, with a focus on environmentally friendly and energy-saving products. Supermicro uses a building blocks approach to allow for combinations of different form factors, making it flexible and adaptable to various customer needs. Their expertise includes system engineering, focused on the importance of validation, and ensuring that all components work together seamlessly to meet expected performance levels. Additionally, they optimize costs through different configurations, including choices in memory, hard drives, and CPUs, which together make a significant difference in the overall solutions that Supermicro provides. Ampere Computing Ampere is a semiconductor design company for a new era, leading the future of computing with an innovative approach to CPU design focused on high-performance, energy-efficient AI compute. 
As a pioneer in the new frontier of energy-efficient high-performance computing, Ampere is part of the Softbank Group of companies, driving sustainable computing for AI, Cloud, and edge applications. For more information, visit amperecomputing.com. To find more information about optimizing your code on Ampere CPUs, check out our tuning guides in the Ampere Developer Center. You can also get updates and links to more great content like this by signing up for our monthly developer newsletter. If you have questions or comments about this case study, there is an entire community of Ampere users and fans ready to answer at the Ampere Developer community. And be sure to subscribe to our YouTube channel for more developer-focused content. Check out the full Ampere article collection here.

By John Oneill
Refactoring a Legacy React Monolith With Autonomous Coding Agents
Refactoring a Legacy React Monolith With Autonomous Coding Agents

I've been wrangling React codebases professionally for well over ten years now, and honestly, the story is always the same in 2026: teams inherit these massive, everything-in-one-place apps built back when Create React App felt like the future. All the logic — auth, shopping cart, product lists, user profiles — lives in a handful of giant files. Props get drilled six levels deep, the state is scattered, and nobody wants to touch it because one wrong move brings the whole thing down. Last year, I led a refactor on a five-year-old dashboard exactly like that. We managed to break it into proper feature slices and even laid the groundwork for microfrontends. The thing that made the biggest difference? A multi-agent AI setup that did a lot of the heavy lifting for us. It wasn't magic — it still needed human eyes — but it turned a three-month nightmare into something we wrapped in five weeks. In this piece, I'll walk you through how I built that system. We'll take a messy little React monolith (the kind you see everywhere) and let a team of AI agents analyze it, plan the refactor, write the new modular code, add tests, and review everything. We'll use LangGraph to orchestrate the agents and Claude 3.5 Sonnet as the LLM (though GPT-4o works fine too). What You'll Need Nothing exotic: Node 20+ and your package manager of choice.Python for the agent orchestration (LangChain/LangGraph live there — it's still the most reliable option).An Anthropic API key (or OpenAI). Just export it as ANTHROPIC_API_KEY.Git and VS Code. I lean heavily on the Cursor extension these days for quick diff reviews. Grab the sample app we'll be working with — a tiny e-commerce dashboard where login, product list, and cart are all crammed into src/App.js. It's deliberately ugly, but painfully realistic. 
Here's the heart of the mess:

JavaScript

import React, { useState } from 'react';
import './App.css';

function App() {
  const [user, setUser] = useState(null);
  const [cart, setCart] = useState([]);
  const [products] = useState([{ id: 1, name: 'Widget', price: 10 }]);

  const login = (username, password) => {
    if (username === 'admin') setUser({ username });
  };

  const addToCart = (product) => {
    setCart([...cart, product]);
  };

  return (
    <div className="App">
      {!user ? (
        <form onSubmit={(e) => { e.preventDefault(); login(e.target.username.value, e.target.password.value); }}>
          <input name="username" placeholder="Username" />
          <input name="password" type="password" />
          <button>Login</button>
        </form>
      ) : (
        <>
          <h1>Welcome, {user.username}</h1>
          <div>
            <h2>Products</h2>
            {products.map(p => (
              <div key={p.id}>
                {p.name} - ${p.price}
                <button onClick={() => addToCart(p)}>Add to Cart</button>
              </div>
            ))}
          </div>
          <div>
            <h2>Cart ({cart.length})</h2>
            {/* cart items would go here */}
          </div>
        </>
      )}
    </div>
  );
}

export default App;

You get the idea: everything lives in one component, auth is fake and insecure, no routing, no code splitting.

Why Legacy React Apps Are Such a Pain

Most big companies are still running apps that started life pre-React 18. Giant components, prop drilling everywhere, bundle sizes that make mobile users cry. Adding a new feature means touching half the codebase and praying the tests (if they exist) still pass. Agentic workflows help because they can read the whole thing at once, spot patterns we miss when we're deep in the weeds, and churn out consistent modular code faster than any human could.

The Agent Team

I run five specialized agents that hand work off to each other:

Analyzer – reads the code and produces a structured report.
Planner – turns that report into concrete steps.
Coder – writes the actual refactored files.
Tester – generates meaningful tests.
Reviewer – catches anything that slipped through.

The Analyzer we already made pretty thorough in the last version. 
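Before zooming in, it helps to see the shape of the handoff: each agent is a function from state to state, and the workflow is essentially a sequential pipeline. Here is a minimal JavaScript sketch with stub agents standing in for real LLM calls (the real orchestration uses LangGraph, which adds conditional edges and retries on top of this basic shape):

```javascript
// Each agent is a function: state in, updated state out.
// Real agents would call an LLM; these stubs just show the handoff.
const agents = [
  function analyzer(state) {
    return { ...state, report: ['monolithic component', 'no code splitting'] };
  },
  function planner(state) {
    return { ...state, plan: state.report.map((issue) => `fix: ${issue}`) };
  },
  function coder(state) {
    return { ...state, files: { 'src/AppRoutes.jsx': '/* refactored */' } };
  },
  function tester(state) {
    return { ...state, tests: Object.keys(state.files).map((f) => `${f}.test`) };
  },
  function reviewer(state) {
    return { ...state, approved: state.tests.length > 0 };
  },
];

// Run the chain: each agent receives the accumulated state
function runPipeline(initialState) {
  return agents.reduce((state, agent) => agent(state), initialState);
}

const result = runPipeline({ source: 'src/App.js' });
// result.approved → true once the tester produced at least one test
```

The data flow is the same in the full system: analysis feeds planning, planning feeds code generation, and so on down the line.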
Let's spend more time on the two that do the real work: the Coder and the Tester.

Coder Agent

This is the one that actually moves code around. I've learned the hard way that vague prompts lead to broken imports and forgotten lazy loading, so I lock it down pretty tight. Here's the system prompt I use:

```python
coder_prompt = ChatPromptTemplate.from_messages([
    ("system", """You're a senior React engineer whose specialty is cleaning up old monoliths.
Implement the refactor plan exactly—no creative detours.

Rules I always follow:
- Functional components and hooks only.
- Feature-sliced layout: src/features/auth/, src/features/products/, src/features/cart/
- React Router v6+ with proper <Routes> and <Route>
- Every route component wrapped in React.lazy() + Suspense for code splitting
- Shared state lives in dedicated contexts under src/context/
- Forms are fully controlled (no e.target.username nonsense)
- Components stay small and focused
- Relative imports must be correct in the new structure
- Don't add new dependencies unless the plan explicitly says so

Output must be a JSON object: keys are full file paths, values are complete file contents.
Include every new or changed file. Nothing else."""),
    ("user", """Analysis JSON: {analysis_json}

Original files: {original_files}

Plan: {plan}""")
])
```

Tester Agent

Good tests are what keep me from losing sleep after a refactor. The tester prompt forces realistic RTL/Jest tests:

```python
tester_prompt = ChatPromptTemplate.from_messages([
    ("system", """You're a frontend testing specialist. Write clean, useful tests
with React Testing Library and Jest.

For every important new or changed component:
- Test rendering and key interactions
- Use proper roles and accessible queries
- Mock contexts when needed
- Include at least one error/empty state test where it makes sense
- Keep tests focused—aim for meaningful coverage, not 100% theater

Output JSON: keys are test file paths (e.g. src/features/auth/LoginForm.test.jsx),
values are full test files."""),
    ("user", "Refactored files: {refactored_files}")
])
```

What Happens When We Run It

Feed the original App.js into the workflow. The Analyzer spots the usual suspects (high-severity coupling, an oversized component, no code splitting, insecure auth) and gives us a clean JSON report. The Coder takes the plan and produces things like:

- A proper LoginForm.jsx with controlled inputs
- Separate ProductsList.jsx and Cart.jsx
- Context providers for auth and cart
- An AppRoutes.jsx that looks roughly like this:

```javascript
import React, { Suspense } from 'react';
import { BrowserRouter, Routes, Route, Navigate } from 'react-router-dom';

const LoginForm = React.lazy(() => import('./features/auth/LoginForm'));
const ProductsList = React.lazy(() => import('./features/products/ProductsList'));
const Cart = React.lazy(() => import('./features/cart/Cart'));

function AppRoutes() {
  return (
    <BrowserRouter>
      <Suspense fallback={<div>Loading...</div>}>
        <Routes>
          <Route path="/login" element={<LoginForm />} />
          <Route path="/products" element={<ProductsList />} />
          <Route path="/cart" element={<Cart />} />
          <Route path="*" element={<Navigate to="/login" />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}

export default AppRoutes;
```

The Tester then writes solid tests. One of my favorites from a real run:

```javascript
import { render, screen, fireEvent } from '@testing-library/react';
import LoginForm from './LoginForm';
import { AuthContext } from '../../context/AuthContext';

const renderWithContext = (ui, { user = null, login = jest.fn() } = {}) => {
  return render(
    <AuthContext.Provider value={{ user, login }}>
      {ui}
    </AuthContext.Provider>
  );
};

test('submits credentials correctly', () => {
  const mockLogin = jest.fn();
  renderWithContext(<LoginForm />, { login: mockLogin });
  fireEvent.change(screen.getByPlaceholderText('Username'), { target: { value: 'admin' } });
  fireEvent.change(screen.getByLabelText(/password/i), { target: { value: 'secret' } });
  fireEvent.click(screen.getByRole('button', { name: /login/i }));
  expect(mockLogin).toHaveBeenCalledWith('admin', 'secret');
});
```

The Reviewer usually asks for one or two small tweaks (like adding a redirect after login), we loop back to the Coder, and we're done.

Running the Tests and Shipping

npm test on the generated suite usually passes after the first or second iteration. Bundle size drops noticeably once the lazy loading is in place. I still review every diff in Cursor (AI doesn't get a free pass), but the volume of clean, consistent code it produces is night and day compared to doing it all manually.

Lessons From the Trenches

Detailed, structured prompts are what make this usable in real projects. Loose instructions mean chaos; JSON output with file paths means easy automation. We've used this pattern on much larger apps (10–15k lines) and consistently needed only minor manual fixes afterward.

Important Caveats If You're Thinking of Running This on Your Own Monolith

Look, this setup works great on small-to-medium apps (a few hundred to a couple thousand lines), and it's a fantastic way to prototype a refactor or clean up a prototype. But before you point it at your company's million-line dashboard, here are the realities I've run into:

- Token limits are real. Even Claude 3.5's 200k context window fills up fast on anything bigger than a modest app. You'll need to chunk the codebase (feed in one feature or directory at a time) or build smarter retrieval tools, like vector search over your repo. Full-app refactors in one shot just aren't feasible yet.
- Hallucinations and subtle bugs happen. The agents are good, but they can invent imports that don't exist, miss edge cases in business logic, or subtly change behavior. Never merge without a thorough human diff review. In our bigger projects, we treat the AI output as a very smart PR draft, not final code.
- Costs add up. Running multiple agents with long contexts on a large codebase can burn through hundreds of dollars in API credits quickly. Start small and monitor usage.
- Non-code concerns get ignored. package.json changes, build config, environment variables, and custom webpack setups: these agents won't touch them unless you explicitly add tools for it.
- It's best for mechanical refactors. Extracting components, adding routing, introducing contexts, code splitting: these are where it shines. Complex domain logic migrations or performance optimizations still need heavy human involvement.
- Top-tier companies are experimenting, not relying. Places like Meta, Google, and Amazon are piloting agentic workflows internally, but they're wrapping them in heavy guardrails, custom retrieval systems, and mandatory review gates. Full autonomy on critical monoliths isn't happening yet; think a 30–50% productivity boost on targeted tasks, not full replacement.

Use this as an accelerator, not a silver bullet. Start with one bounded feature, let the agents propose the changes, review and tweak, then expand. That's how we've gotten real wins without disasters.

Wrapping Up

If you're staring at a legacy monolith right now, give this approach a shot. It's not about replacing engineers; it's about letting us focus on the hard problems instead of endless boilerplate and busywork.

I'd love to hear what your biggest React refactor headache is at the moment. Drop it in the comments, and maybe we can figure out how to tackle it next. Happy (and much less painful) refactoring!

By Rajiv Gadda

Refactoring a Legacy React Monolith With Autonomous Coding Agents
A step-by-step guide to building multi-agent AI workflows with LangGraph that can analyze, plan, code, test, and review the refactoring of a legacy React monolith.
January 22, 2026
by Rajiv Gadda