
Extending a music streaming platform with AI features

A music streaming service approached Evrone to improve its codebase and help roll out new AI-powered features. The app brings together personalized music streaming, podcasts, and audiobooks in one product. Our team joined the project to make the platform faster, more stable, and easier to scale.

April 2026 · 6 min read

The client, whose name is confidential under an NDA, runs a catalog of over 75 million tracks. The platform supports offline listening, music recognition, and virtual assistant integrations. It includes features designed for children, audiobook listeners, professional musicians, podcasters, and record labels.

The product has already moved beyond the startup stage into a full-grown platform. Evrone’s outstaff team was brought in to support a rapid expansion, and our experts integrated smoothly into the client’s technical teams across analytics, machine learning, and Python and Go development.

Building AI features for user acquisition and retention

AI-powered recommendations are now at the core of any music streaming product. Users expect the platform to know their taste and deliver a deeply personalized experience. The way each service achieves this is their own trade secret and competitive edge: some rely on a massive user pool, analyzing behavior to predict the next track, while others work with leaner datasets and combine different signals.

In this product, AI tools are used beyond matching listeners with tracks and artists. The client’s team also applies them across the product to automate development work, improve the user experience, and test new features. Evrone’s ML engineers helped improve two specific areas: user retention and the recommendation system.

Retention is often driven by dark patterns, but here the team took a different approach, based on habit modeling and prediction. The Customer Journey team's job is to surface exactly what a listener wants at any given moment: an upbeat playlist for a workout, a podcast for a commute, or an audiobook before bedtime.

The recommendation system plays a major role in bringing in new users as well. For example, we contributed to a personalization model designed to build playlists around both a listener’s tastes and what they are doing at the moment, with the goal of improving product metrics.

Getting recommendations right takes more than analyzing favorite genres and artists. Behavioral signals matter just as much: this track was played twice, that one was skipped after ten seconds. These breadcrumbs are what allow the platform's AI agents to hit a high level of accuracy.
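To make this concrete, here is a minimal sketch of turning raw playback events into per-track behavioral signals of the kind described above. The event schema (track ID, action, seconds played) and the ten-second early-skip threshold are illustrative assumptions, not the client's actual pipeline.

```python
from collections import defaultdict

def behavior_features(events):
    """Aggregate raw playback events into per-track signals.

    Each event is a (track_id, action, seconds_played) tuple, where
    action is "play" or "skip". Returns per-track play counts, skip
    counts, and early-skip counts (skipped within 10 seconds), which
    a downstream model can use as positive/negative feedback.
    """
    stats = defaultdict(lambda: {"plays": 0, "skips": 0, "early_skips": 0})
    for track_id, action, seconds_played in events:
        if action == "play":
            stats[track_id]["plays"] += 1
        elif action == "skip":
            stats[track_id]["skips"] += 1
            if seconds_played < 10:
                stats[track_id]["early_skips"] += 1
    return dict(stats)

events = [
    ("t1", "play", 180), ("t1", "play", 175),  # replayed twice: strong positive
    ("t2", "skip", 8),                         # skipped after 8s: strong negative
]
features = behavior_features(events)
```

A real system would stream these aggregates from an event bus rather than a list, but the shape of the signal is the same.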

Beyond recommendations, our team worked on another indirect but tangible driver of new user growth: LLM-based generation of SEO descriptions, which helped artists and releases rank higher in search engines.
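A sketch of how such generation might be prompted is below. The field names, the artist and release, and the prompt wording are all hypothetical; the actual model call (to a self-hosted LLM) is omitted.

```python
def seo_prompt(artist, release, genres, top_tracks):
    """Build a structured prompt for a locally hosted LLM that
    writes a search-friendly description of a release."""
    return (
        "Write a 2-3 sentence SEO description for a music release.\n"
        f"Artist: {artist}\n"
        f"Release: {release}\n"
        f"Genres: {', '.join(genres)}\n"
        f"Notable tracks: {', '.join(top_tracks)}\n"
        "Mention the artist and release name naturally; avoid keyword stuffing."
    )

prompt = seo_prompt("Aurora Lane", "Night Drive", ["synthwave"], ["Neon Rain"])
# The prompt would then be sent to a self-hosted model, e.g. via an
# OpenAI-compatible endpoint; the network call is not shown here.
```

Grounding the prompt in catalog metadata keeps the generated text factual, which matters for both search ranking and user trust.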

We also contributed to improving the platform's internal search using LLMs. This is especially important for a streaming service, where tracks and albums often have similar or even identical titles. Users need more than a text match, so the search must also consider context. A heavy metal fan searching for something shouldn't be given a pop song they never asked for.
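The core idea behind context-aware search can be sketched with embedding similarity: tracks and queries live in the same vector space, so two releases with identical titles are separated by the genre and context encoded in their vectors. The toy 2-D embeddings below are an assumption for illustration; a production system would use learned high-dimensional embeddings and an approximate nearest-neighbor index.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, catalog, top_k=3):
    """Rank catalog entries by embedding similarity to the query."""
    ranked = sorted(
        catalog,
        key=lambda t: cosine(query_vec, t["embedding"]),
        reverse=True,
    )
    return [t["title"] for t in ranked[:top_k]]

# Toy 2-D embeddings: axis 0 leans "metal", axis 1 leans "pop".
catalog = [
    {"title": "Angel (metal cover)", "embedding": [0.9, 0.1]},
    {"title": "Angel (pop ballad)",  "embedding": [0.1, 0.9]},
]
# A metal fan's query embedding points along axis 0, so the text-identical
# pop track ranks below the metal one.
top = semantic_search([1.0, 0.0], catalog, top_k=1)
```

In practice the query vector would blend the search text with the listener's taste profile, which is what keeps the pop song out of the metal fan's results.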

Optimizing Python and Ruby on Rails code

Rewriting and optimizing code is not a thrilling task, but the impact is valuable and easy to measure. In audio and video streaming, refactoring pays off for two reasons at once: users notice speed and reliability in a competitive market, and the business benefits from lower infrastructure costs.

Evrone's developers worked on a classic optimization scenario: early on, the client had chosen technologies that made it possible to ship quickly and focus on growing the user base and finding product-market fit. That's a good strategy, but it comes with a trade-off: rising infrastructure costs as the product scales.

Once the service had grown beyond the MVP stage, the client decided to rewrite the most resource-intensive components in Go to bring those costs down. The key requirement for the migration was to preserve the existing behavior, for example the format of error responses, so other teams could keep working without disruption.
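One way to enforce "preserve the existing behavior" during such a migration is a contract test that compares the legacy and rewritten services field by field, ignoring values that legitimately differ. The response shape and field names below are hypothetical, a minimal sketch of the idea rather than the client's actual contract.

```python
def normalize_error(payload):
    """Reduce an error response to the fields other teams depend on,
    ignoring values that legitimately differ between runs
    (timestamps, trace IDs, exact message wording)."""
    return {
        "status": payload["status"],
        "code": payload["error"]["code"],
        "has_message": isinstance(payload["error"].get("message"), str),
    }

def contract_matches(legacy_response, rewritten_response):
    """True if the rewritten service preserves the legacy error shape."""
    return normalize_error(legacy_response) == normalize_error(rewritten_response)

legacy = {
    "status": 404,
    "error": {"code": "TRACK_NOT_FOUND", "message": "no such track", "trace_id": "a1"},
}
rewritten = {
    "status": 404,
    "error": {"code": "TRACK_NOT_FOUND", "message": "track missing", "trace_id": "b2"},
}
ok = contract_matches(legacy, rewritten)
```

Running such checks against both implementations in CI gives the other teams confidence that the Go version is a drop-in replacement.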

Rewriting the heaviest microservices resulted in a 25% reduction in resource consumption and a 20% performance improvement.

At the same time, we tackled the platform's internal architecture, splitting monoliths into microservices to make them easier to maintain and extend. One of those monoliths included the analytics platform for artists and labels, which was migrated from Ruby on Rails to Python. The client was fully satisfied with the functionality of the service, so the task was not to rethink the product, but to integrate it into the company’s broader Python ecosystem.

Our backend engineer assigned to the project had strong expertise in both Rails and Python, and led the migration. The result is now used extensively, giving artists and labels detailed analytics on their listeners and play data.

Bringing AI tools into the development workflow

As technical experts with broad experience, we also proposed several ideas for using AI to automate deployment and manage infrastructure through Git. One example is an AI agent designed to handle routine developer tasks and support the QA team with automated testing.

QA engineers' time is limited, so the goal was to focus it where it matters most. Simple tasks that follow a predictable, repeatable pattern were handed off to an AI built on open-source models available within the streaming platform's own infrastructure, including Qwen and gpt-oss.

Integrated into the CI/CD pipeline, the tool analyzes a service’s code, populates the test database with the required data, identifies available endpoints, tests them in both positive and negative scenarios, and then generates a report. That helps developers spot issues during code review and hand over higher-quality code to QA, saving the team's time.
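The endpoint-probing step can be sketched as follows. The endpoint map, payloads, and stub transport are all assumptions standing in for what the agent would discover and call over real HTTP inside the pipeline.

```python
import json

def run_checks(endpoints, call):
    """Probe each endpoint with a positive and a negative case and
    collect the outcomes into a report.

    `endpoints` maps a path to (valid_payload, invalid_payload);
    `call(path, payload)` returns an HTTP-like status code.
    """
    report = []
    for path, (valid, invalid) in endpoints.items():
        report.append({
            "endpoint": path,
            "positive_ok": call(path, valid) == 200,   # valid input accepted
            "negative_ok": call(path, invalid) >= 400,  # invalid input rejected
        })
    return report

# Stub transport standing in for real HTTP calls.
def fake_call(path, payload):
    return 200 if payload.get("track_id") else 422

endpoints = {"/v1/play": ({"track_id": "t1"}, {})}
report = run_checks(endpoints, fake_call)
print(json.dumps(report, indent=2))
```

The generated report is what developers review before handing the code to QA; anything flagged here never reaches a human tester.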

The same agent also fills in missing documentation for the codebase, handles straightforward tasks such as checking whether the README still matches the current implementation, and assists with code reviews.

In practice, this kind of agent can be embedded into almost any workflow where the job can be broken into sequential steps and described as an instruction. Once the configuration is defined and the pipeline is launched, the AI assistant can take it from there.

At Evrone, we build AI assistants and automate business processes, from smart parsers and chatbots to autonomous multi-agent systems. Want to bring new AI capabilities to your product? Talk to us.


Our engineers continue to work on a wide range of internal services and use cases, including improving communication between microservices to ensure the platform runs more efficiently.

The result: better metrics, lower costs

Evrone’s specialists became a fully integrated part of the client’s internal teams, demonstrating that outstaffing done right can match the engagement and output of an in-house team.

We rebuilt and optimized the heaviest services in Go, cutting infrastructure costs by 20–30% and improving performance. On top of that, we introduced AI-driven automation across testing, documentation, and code review. Every solution complied with legal restrictions and security requirements, so the project relied only on locally deployed open-source tools.

We strengthened the client’s ML teams working on features tied directly to product metrics such as MAU growth, retention, and reactivation. Our ML engineers also proposed new ideas that made their way into the core product.

If you are looking for a team with the same level of involvement, reach us through the form below and we will get in touch to discuss the right Evrone team setup for your project.

Evrone can also build a streaming service for you from the ground up. We design reliable architectures, set up smooth data streaming, and ensure stable content delivery. Take a look at what we have built for other clients in this area, and get in touch to work with an experienced team that is ready to start.
