<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by :probabl. on Medium]]></title>
        <description><![CDATA[Stories by :probabl. on Medium]]></description>
        <link>https://medium.com/@probabl?source=rss-3364d04aa73a------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*nIBoQQk8vnjHCEwXui77dg.png</url>
            <title>Stories by :probabl. on Medium</title>
            <link>https://medium.com/@probabl?source=rss-3364d04aa73a------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 26 Apr 2026 02:16:50 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@probabl/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How certifications signal your AI skills to recruiters: A conversation with Dr. Tereza Iofciu]]></title>
            <link>https://medium.com/probabl/how-certifications-signal-your-ai-skills-to-recruiters-a-conversation-with-dr-tereza-iofciu-2b990b815260?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/2b990b815260</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[education]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 14 Apr 2026 14:08:14 GMT</pubDate>
            <atom:updated>2026-04-14T16:42:49.340Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7YBrcJZDQn9iXj4CX1X6zQ.png" /></figure><p><em>By </em><strong><em>Arturo Amor</em></strong><em>, ML Engineer at Probabl and scikit-learn core contributor</em></p><p>I am continuing my interview series exploring the shifting landscape of the data science and AI skills ecosystem, bringing together perspectives from both academic research and industry practice.</p><p>In my previous conversation with <a href="https://blog.probabl.ai/the-value-of-certifying-ml-skills">Dr. Fabian Stephany from the University of Oxford</a>, we discussed the empirical findings of his SkillScale project. His research has found that having machine learning skills results in the largest wage premium for professionals (40%), and that having verifiable certifications in your resume increases your likelihood of landing an interview by 15% in an increasingly crowded job market.</p><p>Today, I am moving from the lab to the technical training that takes place on the ground with Dr. Tereza Iofciu. Tereza leads the Department for Data Science, AI Engineering, and AI Product Management at neuefische, one of Germany’s premier bootcamps for tech talent. Tereza and her team are at the forefront of developing curricula and leading the intensive training programs that equip learners and career-switchers with the technical skills required to land their next job. As they put it on their website, they “tech you to the next level.”</p><p>In this interview, we dive into the practical realities of AI skills training in the German and European job markets. Tereza shares her insights on why business acumen is now inseparable from technical depth, the challenges for jobseekers applying for junior roles, and how the rise of generative AI tools is making human relationships and accredited certifications more relevant than ever.
Whether you’re a hiring manager or a jobseeker, Tereza offers a timely perspective on how maintaining a learner mindset is an effective strategy for adapting to new AI tools, the changing job market, and faster, AI-augmented delivery cycles in enterprises.</p><h3>A conversation with Dr. Tereza Iofciu</h3><p><strong>Dr. Arturo Amor: </strong>Tereza, please tell me a bit about yourself and what you do at neuefische?</p><p><strong>Dr. Tereza Iofciu: </strong>I’m currently leading the Department for Data Science, AI Engineering, and AIPM, which, besides managing a fairly large teaching team, also involves teaching in bootcamps and keeping the curriculum up to date with the latest trends.</p><p><strong>Dr. Arturo Amor: </strong>neuefische has been training and placing tech talent in Germany for years. Could you give me a sneak peek into the top trends you’ve observed with regard to how the job market and demand for technical data science and AI skills have been shifting?</p><p><strong>Dr. Tereza Iofciu: </strong>I’ve always been a promoter of combining business acumen with data science skills. This trend seems to have become stronger now. It’s no longer enough to be technically good; you also need to build the right things. Especially now, when, with the advances of Gen AI, there is an expectation that everything can be built faster. The more worrisome trend, though, is the disappearance of junior and mid-level roles.</p><p><strong>Dr. Arturo Amor: </strong>I recently spoke with Dr. Fabian Stephany from the University of Oxford, whose research shows that formal ML and AI certifications from bootcamps and trusted providers are paying off more and more. In particular, he and his team have found that:</p><ol><li>By having general AI-related skills, professionals can earn 21% more than those who don’t have such skills.
With machine learning skills, that boost reaches up to 40%.</li><li>As generative AI makes it increasingly hard for recruiters to identify candidates with genuine skills, having certified machine learning skills in your resume increases the likelihood of landing an interview invitation by 15%.</li></ol><p>From your vantage point working with both learners and employers, does that match what you’re seeing at neuefische?</p><p><strong>Dr. Tereza Iofciu: </strong>I’ve also observed that the job market has changed. On one side, people are struggling to find roles; and on the other side, companies are struggling to find people. When a job posting is published, it’s flooded either with AI-generated applications or unrealistic expectations. Thus, it makes sense that accredited certifications start having more weight again. Similarly, though, candidates are faced with AI interviews… It would be cool to also have certifications for companies!</p><p><strong>Dr. Arturo Amor: </strong>When the companies you partner with are evaluating candidates for ML or AI roles, what types of evidence of skills in resumes do they care about the most?</p><p><strong>Dr. Tereza Iofciu: </strong>I would say the usual: technical skills, clear communication, and of course readiness to work with AI while being mindful of its limitations. In addition to this, a beginner mindset is a door opener. This is usually something quite a lot of career changers who go through bootcamps have. Plus their multidisciplinary experience.</p><p><strong>Dr. Arturo Amor: </strong>Which ML and AI skills are you seeing the most demand for right now, and where are the sharpest gaps between what learners arrive with and what employers need?</p><p><strong>Dr. Tereza Iofciu: </strong>This is a good question. I would say being able to do your work well. Due to AI coding tools, it is becoming increasingly difficult to keep people motivated to learn first how to code, do data analysis and train models themselves.
Thinking of the sharpest gap, maybe it’s the systems thinking around data and ML and AI, which makes sense as most people coming into bootcamps do come for getting applied experience in these fields.</p><p><strong>Dr. Arturo Amor: </strong>Some job markets are known for favoring academic credentials over certifications. Do you see that calculus shifting, and if so, what’s driving it?</p><p><strong>Dr. Tereza Iofciu: </strong>It really depends on the job. R&amp;D would still prefer academic credentials, as people are expected to work on long-running research. On the other hand, industry jobs expect people to be more team and business oriented, and work in faster delivery cycles, which is what happens during all the project phases.</p><p><strong>Dr. Arturo Amor: </strong>If you had to advise someone seeking to enter the AI job market in Germany tomorrow, what would you recommend they do today to land their dream job?</p><p><strong>Dr. Tereza Iofciu: </strong>Oh dear 😅 Go to meetups! Find a way to contribute to your local AI community, try to give workshops and talks. In the age of AI slop, networking has become relevant again, besides the fact that teaching helps you organize your thoughts and prepares you better for job interviews.</p><h3>About Dr. Tereza Iofciu</h3><p>Tereza Iofciu is a data and AI expert, leadership coach, and PSF Fellow with 15+ years of experience leading data and product teams at neuefische, FREE NOW, and New Work (XING). She helps professionals lead and adapt in the age of AI through her Data Diplomat Framework™, bridging technical depth with human leadership.
Alongside that, she has been volunteering in the Python community and has worn many hats over the years: PyLadies Hamburg organizer, Python Software Verband board member, NumFOCUS DISC Steering Committee member, various Python Software Foundation work groups and PyPodcats co-host.</p><h3>Learn more about Skolar by Probabl</h3><p>The scikit-learn core team created the <a href="https://www.fun-mooc.fr/en/courses/machine-learning-python-scikit-learn/">“Machine learning in Python with scikit-learn”</a> MOOC, following the open source philosophy to empower everyone, everywhere.</p><p>Since 2025, this mission has been shared and actively supported by Probabl. We curate the <a href="https://skolar.probabl.ai/en/a/2592619793993357212;p=1;pa=0">Skolar MOOC</a>, which covers best practices for machine learning in Python and the newest functionalities in scikit-learn. Most importantly, it provides learners with guidance to get hands-on experience in a world where code generated by agents has to be validated by capable humans in the loop. Learn more about the MOOC and Skolar certifications on our <a href="https://probabl.ai/certification">website</a>.</p><h3>For more from Probabl</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2b990b815260" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/how-certifications-signal-your-ai-skills-to-recruiters-a-conversation-with-dr-tereza-iofciu-2b990b815260">How certifications signal your AI skills to recruiters: A conversation with Dr.
Tereza Iofciu</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The open source tools of enterprise data science: A conversation with Merel Theisen]]></title>
            <link>https://medium.com/probabl/the-open-source-tools-of-enterprise-data-science-a-conversation-with-merel-theisen-c4458616870f?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/c4458616870f</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 31 Mar 2026 14:01:03 GMT</pubDate>
            <atom:updated>2026-03-31T14:01:03.050Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Marie Sacksick, Director of Market Intelligence at Probabl, interviews Merel Theisen, Tech Lead of Kedro at McKinsey &amp; Co’s QuantumBlack, about how open source tools like Skore and Kedro drive lasting value for enterprise data science teams." src="https://cdn-images-1.medium.com/max/1024/0*c5-PBS8H-a2FRB_N.png" /></figure><p><em>By </em><strong><em>Marie Sacksick</em></strong><em>, Director of Market Intelligence at Probabl</em></p><p>Ask any enterprise data science team what slows them down, and you’ll probably hear similar answers. Notebooks that work locally but fall apart in production. Pipeline code that only one person truly understands. New hires who spend their first weeks reverse-engineering what the team before them built. Input data that has to be explored and cleaned all over again. These aren’t exotic edge cases — they are the norm.</p><p>This is where open source tools — from <a href="https://skore.probabl.ai/">Skore</a> by Probabl to <a href="https://kedro.org/">Kedro</a> by QuantumBlack — are making the difference for enterprise data science teams. This week, I sat down with Merel Theisen, Tech Lead of Kedro at QuantumBlack, to discuss how open source tools drive lasting value for enterprise data science teams.</p><h3>Building for enterprise data science teams</h3><p>At Probabl, we are driven by the conviction that the data science industry is failing enterprises — not for lack of compute or data, but for lack of rigor and structure. Most models never reach production. Reproducibility remains an aspiration. Knowledge walks out the door with every departing data scientist or engineer. And a generation of automated tools that promise magic instead deliver opacity, technical debt, and lock-in.</p><p>This conviction is inseparable from where we come from. Probabl was founded by the creators and maintainers of scikit-learn, the most downloaded Python library for machine learning.
In March 2026 alone, <a href="https://clickpy.clickhouse.com/dashboard/scikit-learn">scikit-learn was downloaded over 200 million times</a>. We don’t observe the data science world from the outside. Our founders built the open source infrastructure it runs on, and keep doing so. And that history shapes what we do.</p><p>We’re building for enterprises that want to truly own their data science: those that prize tools that build institutional knowledge rather than concentrating it in black boxes or third-party platforms. As our CEO François Méro wrote recently, <a href="/skore-is-live">our four guiding principles</a> are: (1) science first, (2) composability, (3) reusability, and (4) transparency.</p><p>We’re putting these principles into practice with <a href="https://skore.probabl.ai/">Skore</a>, available as an open source library and as an enterprise platform that empowers data science teams to collaborate, scale their practice, and increase the impact of their AI projects.</p><p>None of this is built in isolation, though. The tools from the broader open source ecosystem — and the vibrant communities that maintain them at the state of the art — are essential to how enterprises can own their data science.</p><p>Kedro, the Python framework for production-ready data pipelines, is an important piece of that puzzle. By giving teams a standardized project structure and a principled way to build those pipelines, it addresses many of the same structural problems we think about at Probabl every day: how to move from individual heroics to institutional practice, from one-off experiments to reproducible, auditable systems.</p><h3>A conversation with Merel Theisen</h3><p>To learn more about the design choices and vision guiding Kedro, I sat down with Merel Theisen, Tech Lead of Kedro and Principal Software Engineer at QuantumBlack.
We discussed how Kedro is built and why, what a healthy open source data science ecosystem actually looks like in practice, and how tools like Kedro and Skore create value for enterprises.</p><p><strong>Marie Sacksick: </strong>Merel, for someone coming into this with zero context: what is Kedro and what problems does it solve for enterprise data science teams?</p><p><strong>Merel Theisen:</strong> Kedro is an open source Python framework hosted by the Linux Foundation. It brings software engineering best practices to data science and data engineering, giving teams a standardized way to build production-ready data pipelines. For enterprise teams specifically, it solves some very real pain points: inconsistent project structures across teams, code that works in notebooks but falls apart in production, and the difficulty of collaborating on pipeline code when everyone has their own way of doing things. Kedro gives you a common foundation. This way teams can focus on the actual data science rather than reinventing project scaffolding every time.</p><p><strong>Marie Sacksick: </strong>Data scientists rarely use a single tool in a vacuum. You may stitch together Kedro for data pipelining, scikit-learn for training machine learning models, SHAP for interpretability, and MLflow for monitoring models once they’re in production. Can you give us a sneak peek into a time you’ve seen Kedro used with tools like scikit-learn to drive real-world impact — what was the problem, and what was the outcome?</p><p><strong>Merel Theisen:</strong> One great example is a large Brazilian independent broker that had no formal data science practice when they started out. Their main challenge was a classic one: every data scientist built pipelines their own way, and the typical workflow meant shipping notebooks straight to production.
They’d tried adopting tools like MLflow but couldn’t get adoption due to coding overhead.</p><p>The team adopted Kedro and it clicked for them because it met them where they were. It gave them a standardized project structure, encouraged good software engineering practices, and let them think about models as proper software artifacts rather than one-off notebook experiments.</p><p>What’s interesting is what happened next. Once Kedro was in place as that foundational layer, adopting other MLOps tools became much easier. MLflow for experiment tracking and Great Expectations for data validation slotted in naturally because the team already had clean, structured pipelines to integrate them with.</p><p><strong>Marie Sacksick: </strong>When you develop new features for Kedro, how much do you prioritize interoperability with tools in the wider Python data science ecosystem? And going one step further, what does a healthily integrated Python data science ecosystem look like to you?</p><p><strong>Merel Theisen:</strong> Kedro is designed to be tool- and platform-agnostic, so it slots into existing data stacks easily. As a Python library, tools like pandas, scikit-learn, and LangChain work natively inside Kedro projects. We also offer hooks, plugins, and kedro-datasets, our community-driven data connectors, to extend functionality further. A healthy ecosystem, to me, is one where tools complement each other and users can leverage the best of each without friction.</p><p><strong>Marie Sacksick: </strong>At Probabl, we recently launched Skore Hub, a platform that extends our open source library Skore and enables data science teams to easily track, explore, and share their data science workflows. What value do you see Kedro and Skore, when used together, creating for enterprise data science teams?</p><p><strong>Merel Theisen:</strong> To me, Kedro and Skore address different but complementary stages of the data science workflow.
Kedro provides the pipeline structure: how data flows, how code is organised, how projects scale. Skore, as I understand it, focuses on model development quality, such as evaluation reports, methodological diagnostics, and cross-validation insights. I think together they’d give enterprise teams both structured, reproducible pipelines and rigorous model evaluation with built-in best practices, which is exactly the combination needed to move from experimentation to production confidently.</p><p><strong>Marie Sacksick: </strong>Open source thrives on collaboration, yet many enterprise users are consumers rather than upstream contributors. Could you give us a sneak peek into how you and your team have successfully encouraged others to move from just using Kedro to actually contributing to it? Based on your learnings, what is your go-to advice for enterprises that steward core Python libraries for data science and AI?</p><p><strong>Merel Theisen:</strong> Before open-sourcing Kedro, we established strong internal standards around code quality and testing. The challenge was maintaining that bar without discouraging contributions. We invested in clear contribution guides, streamlined developer setup, and responsive PR reviews, as people shouldn’t be left waiting. We also created tiered contribution paths: kedro-datasets is an easy entry point, and our experimental dataset tier lowers the bar further, letting contributors share ideas without needing to fully polish them. My advice: make contributing feel achievable, respond quickly, and offer varied entry points for different commitment levels.</p><p><strong>Marie Sacksick: </strong>Last but not least, how would you pitch scikit-learn to CEOs who want to leverage the power of AI in their businesses?</p><p><strong>Merel Theisen:</strong> I’d pitch scikit-learn as the most battle-tested ML library in the Python ecosystem. It’s open source, widely adopted, and covers the vast majority of practical ML use cases.
And naturally, it works seamlessly inside Kedro projects, so teams get structured pipelines with best-in-class ML tooling out of the box!</p><h3>About Merel Theisen</h3><p>Merel Theisen is a Principal Software Engineer at QuantumBlack, where she is currently the tech lead of Kedro, an open source project that is part of the Linux Foundation. Merel has over ten years of experience in the software industry, with most of her career focused on backend product engineering. Merel is passionate about building products that solve real user problems, and cares deeply about creating robust, well-tested software that follows good engineering principles. Merel is also a strong advocate for open source software, and finds working with the community to be both inspiring and energizing.</p><h3>For more from Probabl</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c4458616870f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/the-open-source-tools-of-enterprise-data-science-a-conversation-with-merel-theisen-c4458616870f">The open source tools of enterprise data science: A conversation with Merel Theisen</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The value of certifying your machine learning skills: A conversation with Dr. Fabian Stephany]]></title>
            <link>https://medium.com/probabl/the-value-of-certifying-your-machine-learning-skills-a-conversation-with-dr-fabian-stephany-807c6c621bf2?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/807c6c621bf2</guid>
            <category><![CDATA[education]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[data-science]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 24 Mar 2026 13:38:21 GMT</pubDate>
            <atom:updated>2026-04-14T16:51:57.897Z</atom:updated>
            <content:encoded><![CDATA[<h3>Arturo Amor, ML Engineer at Probabl and scikit-learn core contributor, interviews Dr. Fabian Stephany from the University of Oxford about the value of ML skills and certifications</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sDsB8FDMkT0mzE4uZQSzQA.png" /></figure><p><em>By </em><strong><em>Arturo Amor</em></strong><em>, ML Engineer at Probabl and scikit-learn core maintainer</em></p><p>We find ourselves in a period of profound uncertainty regarding the future of work. As software and AI redefine traditional workflows, policy makers and business executives alike are grappling with questions like: Which skills will remain relevant, and how do we build a workforce that is resilient to the next wave of disruption?</p><p>To find answers, we must move beyond speculation and look at the evidence. Dr. Fabian Stephany, an Assistant Professor in AI and Work at the University of Oxford, is at the forefront of this effort. Leading the multidisciplinary <a href="https://www.oii.ox.ac.uk/research/projects/skillscale/">SkillScale Project</a>, Dr. Stephany and his team use large-scale labor market data to provide empirical insights on how emerging technologies are reshaping work and the skills that are increasingly in demand.</p><p>In this post, I break down key findings from the SkillScale Project and share my conversation with Dr. Stephany about his latest insights on these questions.</p><h3>What the data says about the value of ML skills</h3><p>By analyzing millions of data points from online job vacancies and digital work platforms, Dr. Stephany and his team in the SkillScale Project have been digging into how the skill composition of professions is changing.
I highlight three findings that stand out to me.</p><h3>The value of complementarity: AI and ML skills pay off</h3><p>In a 2024 <a href="https://www.sciencedirect.com/science/article/pii/S0048733323001828">research paper</a> published in the prestigious <em>Research Policy </em>journal, the researchers found that the value of a skill depends on its complementarity; that is, on the number, diversity, and value of skills it can be combined with. By analyzing nearly 50,000 freelance projects, the researchers found that high-value skills like data analytics derive their worth from their complementarity and function as a force multiplier when paired with others.</p><p>The researchers also found that by having general AI-related skills, professionals can earn 21% more than their peers without such skills. This includes AI-adjacent roles where the professional uses AI to enhance their primary job (e.g. a marketer using AI for content generation or a project manager using AI for forecasting).</p><p>For data scientists, this means that your professional resilience is built not by hyper-specialization per se, but by developing a diverse set of interlocking skills that create strategic options for the future. For executives, this underscores a strategic shift in how to build human capital in your enterprise: your most resilient employees are those whose skill sets are diverse enough to offer strategic options for future reskilling.</p><h3>Professionals with ML skills enjoy a 40% wage premium</h3><p>In the same <a href="https://www.sciencedirect.com/science/article/pii/S0048733323001828">paper</a>, the researchers identified a hierarchy of wage premiums based on the depth of a professional’s AI expertise. In particular, there is a significant wage premium for workers who have machine learning skills, who see a 40% increase in hourly wages.
This specialized expertise represents the highest wage premium, followed by other types of AI skills such as deep learning (+27%) and natural language processing (+19%).</p><p>Crucially, this premium for machine learning skills is not confined to the tech sector. The researchers found that software and technical skills are often more valuable when applied in non-tech domains; for example, commanding ten times the value in Finance or Legal sectors compared to the Tech domain itself. This indicates that machine learning has become a key general-purpose skill, where the highest economic rewards go to those who can bridge the gap between technical execution and industry-specific application.</p><h3>Certified ML skills increase likelihood of landing an interview invitation</h3><p>In an experimental study involving over 1,700 recruiters across the US and UK, published in a 2026 <a href="https://arxiv.org/abs/2601.13286">working paper</a>, Dr. Stephany and his team examined how AI skills impact hiring decisions in graphic design, administration, and software engineering.</p><p>The researchers found that having ML and AI skills in your resume increases the likelihood of landing an interview invitation by up to 15%. Notably, AI skills can act as a powerful equalizer, capable of offsetting traditional labor market disadvantages related to age or lower formal education. In addition, verifiable certificates for machine learning and AI skills, particularly those issued by recognized universities or companies, act as a credible hiring signal.</p><h3>A conversation with Dr. Fabian Stephany</h3><p>I sat down with Dr. Fabian Stephany to better understand his latest insights on in-demand skills and how data scientists can upskill to remain competitive in the evolving labor market.</p><p><strong>Arturo: </strong>Your research suggests that skills like data analysis and machine learning gain value when paired with others.
For a data scientist today, what are the most underrated complementary skills that significantly boost the market value of their technical expertise?</p><p><strong>Dr. Fabian Stephany:</strong> We certainly see a strong premium for AI-related skills such as data analysis, machine learning, and increasingly the application of AI agents in business workflows. But at the same time, our recent research shows that so-called human or soft skills are becoming more valuable as AI spreads through the workplace.</p><p>The reason is quite straightforward. As AI tools become better at handling repetitive technical tasks such as cleaning datasets, refactoring code, or drafting reports and emails, this frees up cognitive bandwidth for workers to focus on areas where humans still have a comparative advantage. These include things like ethical judgment, communication, and teamwork.</p><p>Interestingly, when we look at occupations where AI adoption is particularly strong, we also observe rising demand for exactly these kinds of human capabilities. So for technical professionals such as data scientists, it is important to think beyond purely technical development. Technical expertise remains essential, but the professionals who will benefit most from AI are those who combine it with strong collaborative skills, the ability to translate technical insights into business decisions, and a sense of responsible and ethical deployment of these technologies.</p><p>In other words, the future value of technical expertise increasingly lies in how well it is embedded in human judgment and collaboration.</p><p><strong>Arturo: </strong>In your 2026 working paper, you found that AI skills significantly increase interview invitations, even for non-technical roles like office assistants.
Since claiming AI skills is becoming easier and more common, how can recruiters distinguish between a candidate who merely uses machine learning or AI tools and one who truly understands how to integrate them into professional workflows?</p><p><strong>Dr. Fabian Stephany:</strong> This question essentially comes down to signaling. Today it has become relatively easy to claim AI expertise, sometimes simply because someone knows how to write prompts or use a particular tool. For recruiters, that makes it increasingly difficult to distinguish between buzzwords and genuine capability.</p><p>In our research we conducted a large online experiment with more than 1,700 recruiters. Interestingly, we find that even self-reported AI skills already increase the probability of being invited to an interview. But the effect becomes significantly stronger when these skills are accompanied by credible credentials.</p><p>Micro-credentials, often short courses lasting one or two weeks offered by trusted industry providers or universities, greatly strengthen the signal. Candidates who list such certified AI skills receive substantially more interview invitations. This effect is particularly strong for applicants who might otherwise face disadvantages in the labor market, such as older workers or candidates with lower levels of formal education.</p><p>So while AI skills are increasingly common claims, credible certification from trusted institutions remains a powerful way to separate genuine capability from simple buzz.</p><p><strong>Arturo: </strong>You’ve identified that employers are increasingly prioritizing practical AI skills over traditional degrees. Do you see a future where verifiable, hands-on certifications become the primary hiring signal for technical roles?</p><p><strong>Dr. Fabian Stephany:</strong> What we observe right now is a shift toward skill-based hiring. 
Employers increasingly focus on specific capabilities rather than relying solely on traditional degrees as signals.</p><p>However, this does not mean that degrees such as bachelor’s or master’s programs have lost their value. In many cases universities simply have not yet scaled up programs that focus specifically on applied AI skills. As a result, employers currently rely more heavily on direct signals of skills because strong academic credentials in these areas are still relatively scarce.</p><p>Micro-credentials, short targeted training programs, are filling this gap at the moment. They provide a fast and credible way to signal practical capabilities.</p><p>In the longer run, however, I expect universities to adapt. As more structured degree programs emerge around AI applications and computational skills, traditional academic credentials will continue to play an important role. The likely future is not the replacement of degrees but rather a hybrid system in which formal education and verifiable skill credentials complement one another.</p><p><strong>Arturo: </strong>The finding that machine learning and AI skills are significantly more valuable in Finance or Legal than in Tech is interesting. Why does the translation of AI expertise into traditional industries command such a high premium?</p><p><strong>Dr. Fabian Stephany:</strong> One explanation is the difference in technological maturity across sectors.</p><p>Many technical professions such as software engineering, machine learning engineering, or data science have already integrated forms of AI and advanced analytics for many years. Even before the recent wave of generative AI, these roles were already using machine learning methods and automation tools to optimize workflows. 
In other words, much of the productivity premium from these technologies has already been captured in the tech sector.</p><p>In contrast, sectors such as finance, legal services, or management are still in the earlier stages of adopting these technologies. Here the potential efficiency gains are often much larger. A lawyer, financial analyst, or manager who effectively integrates AI into their workflow may still see very substantial productivity improvements.</p><p>So the higher premium reflects the fact that AI adoption in these sectors is still catching up, and therefore the marginal impact of AI expertise can be particularly large.</p><p><strong>Arturo: </strong>Looking ahead to 2030, what is your prognosis for the sectors where skills in machine learning and AI will be the most impactful?</p><p><strong>Dr. Fabian Stephany:</strong> Forecasts about the future of work tend to age notoriously badly, so I am cautious about making very precise predictions.</p><p>What we can say, however, is that AI has two distinct channels through which it creates value.</p><p>The first is efficiency gains, making existing processes faster, cheaper, and more reliable. In this dimension there is still enormous untapped potential, especially in small and medium-sized enterprises where digital transformation is often still incomplete.</p><p>The second channel is genuine innovation, the creation of entirely new products, services, or scientific discoveries. This is much harder to predict.</p><p>To draw an analogy from the Industrial Revolution, early on we used steam power to improve existing processes such as mechanizing textile production. The real breakthrough came later when the steam engine was put on rails and created the railway system. That fundamentally transformed the economy.</p><p>With AI we are still largely in the phase of improving existing processes. The real transformative innovations are still ahead of us. 
One sector where this may become particularly visible is pharmaceuticals and biotechnology, where AI could dramatically accelerate the discovery of new drugs and treatments.</p><h3>Learn more about the SkillScale Project</h3><p>For a deeper dive into research findings from the SkillScale Project, explore the following resources:</p><p><strong>📺 Watch</strong></p><ul><li><a href="https://www.bbc.com/videos/cp9m4l0jvk1o">How AI can actually boost your chances of finding a new job</a>. Worried AI is going to steal your job? In this BBC explainer video, Dr. Fabian Stephany explains how AI skills can in fact be an ally when it comes to finding a new role.</li><li><a href="https://www.linkedin.com/posts/fabianstephany_ai-futureofwork-hiring-activity-7419287999730880512-M8VY?utm_source=share&amp;utm_medium=member_ios&amp;rcm=ACoAABRNgu0BbMNu7gnqkFd65chSBUwqigx3Ktc">AI Skills Improve Job Prospects</a>. In this LinkedIn Short, Dr. Fabian Stephany explains the key findings from his 2026 paper, <a href="https://arxiv.org/abs/2601.13286">“AI Skills Improve Job Prospects”</a>.</li><li><a href="https://www.youtube.com/watch?v=UINAsR_ugJk">Code-Based Colleagues: The Future of Work and AI</a>. This micro-documentary by Oxford Sparks provides an overview of how data-driven reskilling can create sustainable jobs.</li><li><a href="https://www.micro1.ai/forum/reskilling-in-the-age-of-ai">Reskilling in the Age of AI</a>. A panel discussion hosted by micro1 and Microsoft AIEI on the shifting requirements of the global workforce.</li><li><a href="https://www.oii.ox.ac.uk/news-events/videos/ais-ripple-effect-on-skills-and-labour-markets/">AI’s Ripple Effect on Skills and Labor Markets</a>. Watch Dr. 
Fabian Stephany’s webinar lecture at Saïd Business School at the University of Oxford, detailing two years of SkillScale research findings.</li></ul><p><strong>📚 Read</strong></p><ul><li><a href="https://arxiv.org/abs/2601.13286">AI Skills Improve Job Prospects: Causal Evidence from a Hiring Experiment</a></li><li><a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/9781394413096.ch18">AI Skills Wanted: How AI Technologies Create Demand for Skilled Workers</a></li><li><a href="https://arxiv.org/abs/2412.19754">Complement or substitute? How AI increases the demand for human skills</a></li><li><a href="https://www.sciencedirect.com/science/article/pii/S0040162525000733">Skills or degree? The rise of skill-based hiring for AI and green jobs</a></li><li><a href="https://www.sciencedirect.com/science/article/pii/S0048733323001828">What is the price of a skill? The value of complementarity</a></li></ul><h3>About Dr. Fabian Stephany</h3><p>Fabian Stephany is an Assistant Professor in AI and Work at the Oxford Internet Institute (OII), University of Oxford, and a <a href="https://www.inet.ox.ac.uk/people/fabian-stephany">Senior Research Fellow</a> with the Institute for New Economic Thinking at the Oxford Martin School. He is also a Future of Work fellow at the Brussels-based think tank <a href="https://www.bruegel.org/people/fabian-stephany">Bruegel</a>, an inaugural fellow at <a href="https://www.microsoft.com/en-us/research/group/ai-for-good-research-lab/ai-economy-institute/">Microsoft’s AI Economy Institute</a>, and a research affiliate at the <a href="https://www.hiig.de/en/fabian-stephany-dr/">Humboldt Institute</a> for Internet and Society in Berlin. 
Additionally, he currently serves as a member of the World Economic Forum’s <a href="https://initiatives.weforum.org/global-future-council-on-human-capital-development/home">Global Future Council</a> for Human Capital Development.</p><p>At the OII, Fabian leads the <a href="https://www.oii.ox.ac.uk/research/projects/skillscale/">SkillScale project</a>, which views skills as a central lens through which to understand today’s labour market transitions. By examining how work quality, job growth, and labour market equitability and sustainability respond to technological change, the project investigates how AI skills are becoming increasingly pivotal for workers and employers alike. As part of his Microsoft fellowship, Fabian is currently exploring the role of AI skills in employability — particularly how working with generative AI enhances job prospects and addresses the gender gap between men and women.</p><p>Fabian is also a co-creator of the <a href="http://onlinelabourobservatory.org/">Online Labour Observatory</a>–a digital data hub hosted in collaboration with the International Labour Organization that provides researchers, policymakers, journalists, and the public with insights into online platform work. 
His research has been published in <a href="https://scholar.google.de/citations?hl=com&amp;user=guvrufQAAAAJ">leading academic journals</a>, such as Research Policy and Scientific Reports, and has received media coverage in <a href="https://www.oii.ox.ac.uk/people/profiles/fabian-stephany/#news_press">outlets around the world</a>, including The Washington Post, The New York Times, The Telegraph, Nikkei Asia, Handelsblatt, and the Frankfurter Allgemeine Zeitung.</p><h3>Learn more about Skolar by Probabl</h3><p>The creators and maintainers of scikit-learn built the Inria MOOC <a href="https://www.fun-mooc.fr/en/courses/machine-learning-python-scikit-learn/">“Machine learning in Python with scikit-learn,”</a> in keeping with the open-source philosophy of empowering everyone, everywhere, with free knowledge.</p><p>Since 2025, this <a href="https://blog.scikit-learn.org/updates/probabl-skolar/">mission</a> has been shared and actively supported by Probabl. The MOOC keeps evolving to adapt to new best practices and new functionality in scikit-learn, and, most importantly, to serve as preparation for hands-on work in a world where code generated by agents has to be validated by capable humans in the loop.</p><p>That’s where <a href="https://skolar.probabl.ai/en/a/2592619793993357212;p=1;pa=0">Skolar</a> comes into play.</p><p>Learn more about Probabl’s <a href="https://skolar.probabl.ai/en/a/2592619793993357212;p=1;pa=0">free educational materials</a> for machine learning with Python and our official <a href="https://probabl.ai/certification">Skolar certifications</a>.</p><h3>For more from Probabl</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><img 
src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=807c6c621bf2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/the-value-of-certifying-your-machine-learning-skills-a-conversation-with-dr-fabian-stephany-807c6c621bf2">The value of certifying your machine learning skills: A conversation with Dr. Fabian Stephany</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[scikit-learn acceleration with GPUs: A conversation with Dr. Andy Terrel]]></title>
            <link>https://medium.com/probabl/scikit-learn-acceleration-with-gpus-a-conversation-with-dr-andy-terrel-fe7c277091c4?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/fe7c277091c4</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 17 Mar 2026 14:10:31 GMT</pubDate>
            <atom:updated>2026-03-26T08:02:10.238Z</atom:updated>
            <content:encoded><![CDATA[<h3>Scikit-learn acceleration with GPUs: A conversation with Dr. Andy Terrel</h3><figure><img alt="scikit-learn acceleration with GPUs: Gaël Varoquaux interviews NVIDIA’s Dr. Andy Terrel" src="https://cdn-images-1.medium.com/max/1024/1*aOMyVgnJiKPPwROoLNXg_w.png" /></figure><p><em>By </em><strong><em>Gaël Varoquaux</em></strong><em>, CSO of Probabl</em></p><p>For over a decade, scikit-learn has served as the bedrock of machine learning, supporting the work of millions of data scientists worldwide. While scikit-learn was originally designed for a CPU-centric world, the advent of new hardware presents an opportunity to supercharge machine learning pipelines.</p><p>Speeding up machine learning workflows isn’t just about technical benchmarks; it’s about turning hours of training into seconds, saving time and money for data scientists and enterprises.</p><p>I sat down with Dr. Andy Terrel from NVIDIA to discuss why this community effort is such a game-changer for the scientific Python ecosystem and enterprise data science.</p><h3>Bringing you up to speed</h3><p>To achieve GPU acceleration in scikit-learn without fragmenting our codebase, we are re-engineering scikit-learn to be backend agnostic. This is a mighty community effort involving our team at Probabl, our peers at Quansight and NVIDIA, and many others from the wider community.</p><p>Historically, supporting GPUs required specialized code for every library, but the array API provides a unified specification that allows scikit-learn to remain flexible. Now, when an estimator is array API-compliant, it can inspect your input data–whether it’s a PyTorch tensor or a CuPy array–and delegate the computation to the matching library’s optimized functions. 
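</p><p>As a rough sketch of that dispatch pattern (a simplified illustration of the idea, not scikit-learn’s actual internals; <code>get_namespace</code> and <code>standardize</code> are hypothetical helpers):</p>

```python
import numpy as np

def get_namespace(X):
    # Ask the array which namespace it belongs to via the standard
    # __array_namespace__ hook exposed by array API-compliant
    # libraries; fall back to NumPy for plain legacy arrays.
    if hasattr(X, "__array_namespace__"):
        return X.__array_namespace__()
    return np

def standardize(X):
    # Backend-agnostic scaling: every operation goes through the
    # namespace the data itself provides, so a CuPy array would be
    # processed by CuPy on the GPU rather than copied to the host.
    xp = get_namespace(X)
    return (X - xp.mean(X, axis=0)) / xp.std(X, axis=0)
```

<p>In scikit-learn itself this dispatch is opt-in, enabled with <code>sklearn.set_config(array_api_dispatch=True)</code>.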
If your data lives on the GPU, the computation stays on the GPU, avoiding expensive memory transfers.</p><p>We have already updated 25 estimators and core tools like the scoring API, ensuring they perform consistently across different hardware backends through rigorous automated testing. The real-world impact of this work is significant; for example, recently <a href="https://blog.probabl.ai/accelerating-scikit-learn-with-gpus-achieving-15x-speed-ups-in-complex-ml-pipelines-with-the-python-array-api">Olivier Grisel</a>, my fellow scikit-learn core maintainer and ML Engineer at Probabl, demonstrated a 15x speed-up in complex machine learning pipelines by offloading compute-intensive steps to the GPU.</p><p>For a deeper dive into the technical implementation and the latest progress, I highly recommend reading the detailed technical updates by my colleague <a href="https://blog.probabl.ai/accelerating-scikit-learn-with-gpus-achieving-15x-speed-ups-in-complex-ml-pipelines-with-the-python-array-api">Olivier</a> and <a href="https://labs.quansight.org/blog/array-api-scikit-learn-2026">Lucy Liu</a> from Quansight.</p><h3>A conversation with Dr. Andy Terrel</h3><p><strong>Gaël Varoquaux: </strong>Andy, for someone coming into this with zero context: why are we working so hard to accelerate Python libraries like scikit-learn with GPUs?</p><p><strong>Andy Terrel: </strong><em>Two main reasons. The forward march of computing technology and the growing needs of science in the AI era. When I started programming computers, multi-threaded programs were rare. HPC centers had wonderful CPUs that could manage 2 threads, but today your phone has 6 cores. What is leading edge in the HPC center will come to commodity systems in time. GPUs are a must in any data center today, and most people are seeing them deployed in their commodity hardware as well for smaller-scale simulations. 
The other trend is incorporating AI into the scientific workload: we see scientists needing to use tools like scikit-learn to build ensembles of models or model regressors. By having our beloved Python scientific tools work on the GPU, we allow scientists to be more efficient and utilize the GPU as part of the full application rather than an occasionally used offloading device.</em></p><p><strong>Gaël Varoquaux: </strong>At your recent webinar, <a href="https://www.nvidia.com/en-eu/events/python-on-the-gpu-webinar/?_hsmi=2">“Python on the GPU: From Libraries to Kernels,”</a> my colleague and co-maintainer of scikit-learn, Olivier Grisel, spoke about our efforts to adopt the Array API and showcased the benefits of GPU acceleration for the data scientist working with scikit-learn. For example, he showed that using GPUs instead of CPUs results in a 15x speed-up when tuning hyperparameters in a complex machine learning pipeline. This is fantastic, but there’s still much work to be done in scikit-learn and of course in many other Python libraries to enable GPU acceleration. From your perspective, how do you think the ecosystem could evolve to make such an endeavor easier?</p><p><strong>Andy Terrel: </strong>The Array API is a big step in allowing code to seamlessly integrate with GPUs, but libraries such as NumPy and SciPy are slow to fully convert their core routines to it. It takes time to move such code bases, but we are pointed in the correct direction. The Array API also only takes code so far, as most code I work with also needs to adopt the correct array interfaces. The array interfaces, e.g. NumPy’s __array_interface__ or Numba’s __cuda_array_interface__, help integrate with other native code. In this vein, the DLPack system has become essential to provide an interface that recognizes the different devices and avoids memory movement as needed. 
While some tools have adopted these APIs and interfaces, we still have work to do to expand tooling for the scientific ecosystem. For example, pure Python applications have wonderful tools like pyrefly and ty for type inference, but scientific code can rarely use them because extension types are not cleanly represented in Python type syntax.</p><p><strong>Gaël Varoquaux:</strong> Open source communities appreciate choice in software and hardware. How do tools like CuPy, Numba, and the Python Array API help open source maintainers and users balance achieving maximum performance with maintaining a healthy, backend-agnostic ecosystem?</p><p><strong>Andy Terrel: </strong>My viewpoint is that we should build high-level open tools for the 80% of cases and then let users determine if they want to specialize to hardware for the extra 20%. There are many cases where an extra 20% of performance is crucial, but for many it is not. A code base built on NumPy can be quickly ported to CuPy. From there, if a routine needs a more finely tuned GEMM or FFT, nvmath-python provides bindings to highly optimized CPU and GPU libraries. These optimized libraries are hard to maintain, so letting vendors provide them means OSS communities can focus on choice rather than optimal performance.</p><p><strong>Gaël Varoquaux: </strong>Thanks to GPU acceleration, we can move from minutes to seconds when training models. It’s a win for data scientists, who get to be more productive and remain in the flow. And it’s a win for enterprises, who get to save time and money. What’s your prognosis of how these wins will materialize for enterprise data science teams? Which sectors do you think will reap the greatest rewards?</p><p><strong>Andy Terrel: </strong>I’ve seen some big wins with scientific instruments. Grid Computing was invented because High Energy Physics needed to offload experimental data and process it. 
Today, scientific workflows that took weeks to process can be done in minutes. When you have instruments that have configurable sensors (and all the big ones do these days), this means scientists can have more control over experiments as data is processed faster and models can be updated on the fly. This sort of acceleration directly translates to industrial processes, robotics, self-driving cars, etc. I spent time in the manufacturing space before coming to NVIDIA, and we were already seeing better yields and faster times from prototype to production with machining.</p><p><strong>Gaël Varoquaux: </strong>Most people associate GPUs with LLMs. How do we raise awareness that GPUs are also game-changing for machine learning with tabular data — the type of data that most enterprises actually run on?</p><p><strong>Andy Terrel:</strong> In my career as a data scientist, I would advise companies to evaluate the speed of decision making with the technology they choose. If a company runs on emailed spreadsheets and weekly meetings, it can take 2–4 weeks for decisions to be made. If there were a dashboard with APIs but daily standups, we would see operations changing in 2–4 days. Both of these cases essentially bring tabular data in front of decision makers, and require careful analysis and subject-matter insight to decipher. Now if we can get tabular data to be instant and correct the first time, with no more arguing about domain models, then businesses can adapt to the market in near real time. This is scary to business leaders; they like their spreadsheets, so there needs to be a phased introduction of the tooling to help transform the business. 
Unfortunately, I don’t know that optimizing enterprises has ever been seen as cool, but operational efficiency will drive leaders to better results, and the tabular data model will be at its heart.</p><p><strong>Gaël Varoquaux: </strong>Looking ahead, do you envision a world where the distinction between “CPU code” and “GPU code” in the Python stack disappears entirely for data scientists?</p><p><strong>Andy Terrel: </strong>Today, I work with data centers that have LPUs and quantum chips as well. The essential challenge is that the programming model differs so much between these chips. With AI agents, we are seeing some transfer between GPU and CPU code, but the two code paths still need to be managed differently for efficiency. High-core-count CPUs may get to a point where the memory hierarchy of the GPU starts getting built in, but I’m a software person and I really don’t know the complexities there.</p><p><strong>Gaël Varoquaux:</strong> Looking ahead again, what are you most excited about when it comes to making our favorite Python libraries for data science run on GPUs?</p><p><strong>Andy Terrel: </strong>I’m most excited about scientific discoveries. Weather prediction, nuclear fission, and astronomical discovery are all being advanced with GPUs today. 
Tools for scientific data analysis are incorporating AI and machine learning by default, which allows researchers to focus on the important aspects of science and perform more surveys for validation before experimentation.</p><h3>Learn more about scikit-learn acceleration</h3><h4>🖲️ Demo</h4><ul><li>Test the GPU speed-ups in this <a href="https://colab.research.google.com/drive/1YrCt5iBPT6gnmp7geahRn_9OqCPrfoLb?usp=sharing">demo</a> made by Olivier Grisel, ML Engineer at Probabl and scikit-learn core maintainer.</li></ul><h4>📺 Watch</h4><ul><li>Olivier Grisel, ML Engineer at Probabl and core maintainer of scikit-learn, demoed a 15x speed-up in a complex ML pipeline in NVIDIA’s <a href="https://www.nvidia.com/en-eu/events/python-on-the-gpu-webinar/">“Python on the GPU: From Libraries to Kernels”</a> webinar in February 2026.</li></ul><h4>📚 Read</h4><ul><li>Adrin Jalali, VP of Labs at Probabl and scikit-learn core maintainer (March 11, 2026): <a href="https://blog.probabl.ai/scikit-learn-roadmap-11-march-2026">Current scikit-learn priorities at Probabl — March 2026 edition</a></li><li>Olivier Grisel, ML Engineer at Probabl and scikit-learn core maintainer (March 10, 2026): <a href="https://blog.probabl.ai/accelerating-scikit-learn-with-gpus-achieving-15x-speed-ups-in-complex-ml-pipelines-with-the-python-array-api">Scikit-learn acceleration with GPUs</a></li><li>Lucy Liu, Software Engineer at Quansight Labs and scikit-learn core maintainer (March 4, 2026): <a href="https://labs.quansight.org/blog/array-api-scikit-learn-2026">Update on array API adoption in scikit-learn</a></li></ul><h3>About Dr. Andy Terrel</h3><p>Andy leads CUDA Python on the product management team. His research focused on domain-specific languages to generate high-performance code for physics simulations with the PETSc and FEniCS projects. Andy is a leader in the Python open-source software community. 
He’s most notably a co-creator of the Dask distributed computing framework, the Conda package manager, the SymPy symbolic computing library, and the NumFOCUS foundation.</p><h3>For more from Probabl</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=fe7c277091c4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/scikit-learn-acceleration-with-gpus-a-conversation-with-dr-andy-terrel-fe7c277091c4">scikit-learn acceleration with GPUs: A conversation with Dr. Andy Terrel</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Current scikit-learn priorities at Probabl — March 2026 edition]]></title>
            <link>https://medium.com/probabl/current-scikit-learn-priorities-at-probabl-march-2026-edition-dff0720c35c0?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/dff0720c35c0</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Wed, 11 Mar 2026 11:08:50 GMT</pubDate>
            <atom:updated>2026-03-16T16:16:36.524Z</atom:updated>
            <content:encoded><![CDATA[<h3>Current scikit-learn priorities at Probabl — March 2026 edition</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*czTKD6FvQnrmVuZtyBsSFQ.png" /></figure><p><em>By </em><strong><em>Adrin Jalali</em></strong><em>, VP of Labs at Probabl and core maintainer of scikit-learn</em></p><p>At Probabl, we provide significant support for scikit-learn’s maintenance and development, and it is important to us to communicate our priorities so the broader community knows where our activities will be focused in the short term and what to expect.</p><p>We also aspire to maintain an up-to-date project board for each of these topics to keep track of the progress.</p><h3>Priorities</h3><h3>1. GPU Support via Array API</h3><ul><li>Moving from pure NumPy to the Array API enables using a diverse set of hardware (including GPUs) as well as native support for computations done by different backends such as PyTorch or CuPy. This has been a long-running project, and many estimators and functions in scikit-learn already support this, but there’s still a lot of work to be done.</li><li>Project board: <a href="https://github.com/orgs/scikit-learn/projects/12">https://github.com/orgs/scikit-learn/projects/12</a></li><li>This work is also supported by the NASA ROSES grant.</li></ul><h3>2. Callbacks</h3><ul><li>This work enables progress reporting, notably in estimators such as GridSearchCV, as well as inspection of estimators as they go through their iterative training process. We will also work on the related SLEP to get the required consensus and move this forward.</li><li>Project board: <a href="https://github.com/orgs/scikit-learn/projects/8/views/2">https://github.com/orgs/scikit-learn/projects/8/views/2</a></li><li>This work is also supported by a CZI-Wellcome Trust grant.</li></ul><h3>3. 
Tree-based models</h3><ul><li>Tree-based models are some of our most used estimators, and it’s important that we give our users the best we can when it comes to these models. For this, we will work on a variety of issues to improve them, e.g. merging Hist Gradient Boosting with Gradient Boosting.</li><li>Project board: <a href="https://github.com/orgs/scikit-learn/projects/26/views/1">https://github.com/orgs/scikit-learn/projects/26/views/1</a></li></ul><h3>4. Displays and UX</h3><ul><li>This section addresses all work related to the look and feel of what users see from scikit-learn, which includes estimator visualisations and displays, as well as what’s provided on the website.</li><li>Project board: <a href="https://github.com/orgs/scikit-learn/projects/10">https://github.com/orgs/scikit-learn/projects/10</a></li><li>Project board: <a href="https://github.com/orgs/scikit-learn/projects/9/views/2">https://github.com/orgs/scikit-learn/projects/9/views/2</a></li><li>This work is also supported by a CZI-Wellcome Trust grant.</li></ul><h3>5. Metadata Routing</h3><ul><li>This is another long-running project, which already enables many common use cases; however, there are areas to improve before it can become the default in the library.</li><li>Project board: <a href="https://github.com/orgs/scikit-learn/projects/4">https://github.com/orgs/scikit-learn/projects/4</a></li></ul><h3>6. 
Misc / Maintenance / Release</h3><p>Other areas where we remain active include:</p><ul><li>Project maintenance: it’s always crucial to maintain the project and enable other contributors to move their projects forward, and we dedicate a fair amount of resources to this area.</li><li>Free-threaded Python support is an area supported by the NASA ROSES grant, which includes maintenance of the build as well as identifying thread-safety or oversubscription issues.</li><li>Supply-chain security is an area also supported by the NASA ROSES grant, which can result in some CI refactoring and improvements to our build process.</li></ul><h3>Labs @Probabl Project Board</h3><p>We also have a board, <a href="https://github.com/orgs/probabl-ai/projects/8/views/1">https://github.com/orgs/probabl-ai/projects/8/views/1</a>, to keep track of all active issues in our Labs team. Internally, we assign a “champion” to each issue or pull request, which means that person is either the author or follows up on the work and makes sure it moves forward. Whenever necessary, we also assign a first and second reviewer.</p><p>People mentioned in that board as a champion or a reviewer are either folks at Probabl or people who work very closely with us.</p><p>Note that the board includes all work done by our team on public repositories, which means not every entry is from the scikit-learn repo. 
Some entries are from other open source projects we support, such as skore-lib, skrub, and skops.</p><h3>For more from Probabl</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dff0720c35c0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/current-scikit-learn-priorities-at-probabl-march-2026-edition-dff0720c35c0">Current scikit-learn priorities at Probabl — March 2026 edition</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[scikit-learn acceleration with GPUs]]></title>
            <link>https://medium.com/probabl/scikit-learn-acceleration-with-gpus-achieving-15x-speed-ups-in-complex-ml-pipelines-with-the-04f73b6d137f?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/04f73b6d137f</guid>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[python]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 10 Mar 2026 15:31:57 GMT</pubDate>
            <atom:updated>2026-03-20T14:42:12.090Z</atom:updated>
            <content:encoded><![CDATA[<h3>Scikit-learn acceleration with GPUs</h3><figure><img alt="By Olivier Grisel, ML Engineer at Probabl &amp; scikit-learn core maintainer" src="https://cdn-images-1.medium.com/max/1024/1*zGHNORe34aZfHKDF1s_s3g.png" /></figure><p><em>By </em><strong><em>Olivier Grisel</em></strong><em>, ML Engineer at Probabl &amp; scikit-learn core maintainer</em></p><p>For over a decade, scikit-learn has served as the bedrock of machine learning, supporting the work of millions of data scientists worldwide and recently surpassing 4 billion downloads [1]. Scikit-learn was originally designed for a CPU-centric world, relying heavily on the foundational stack of NumPy, SciPy, and Cython. However, with the advent of new hardware, there are new opportunities to accelerate machine learning pipelines with scikit-learn.</p><p>Speeding up machine learning pipelines is significant for enterprises, where compute bottlenecks are not only a technical lag but also a barrier to operational agility. When model training takes hours instead of minutes, time-to-insight stretches, delaying the impact of data science projects. Even for smaller datasets, when model training takes minutes instead of seconds, interactive model development in notebooks stops being interactive, disrupting the quick iteration cycle of focused data scientists and, as a result, their productivity.</p><p>In this post, I bring you up to speed on our efforts to adopt the Python array API standard in scikit-learn in order to tackle this problem and to facilitate hardware acceleration in data science workflows. 
An important point to emphasize is that this transition is not only a performance optimization; it is a fundamental re-engineering that allows data scientists to leverage scikit-learn’s 200+ estimators while delegating performance-critical tasks to GPU-backed libraries like PyTorch and CuPy (and perhaps soon JAX), unlocking game-changing speed-ups for complex machine learning pipelines.</p><h3>What is the Python array API standard?</h3><p>Historically, library maintainers like us at scikit-learn faced a vendor lock-in challenge. If we wanted to support GPUs, we would have had to write specialized code paths for every specific backend (e.g., one for NumPy, one for CuPy, another for PyTorch). This led to fragmented codebases and maintenance overhead.</p><p>The Python array API standard [2] solves this by providing a unified specification for NumPy-like operations. It is a common language adopted by major array libraries. By targeting this specification, scikit-learn can remain “backend agnostic.”</p><p><strong>Core Concept:</strong> When an estimator is array API-compliant, it inspects the input data. If you pass a PyTorch tensor residing on an NVIDIA GPU, scikit-learn uses the array API to dispatch the underlying linear algebra to PyTorch’s GPU kernels. The computation happens on the device where the data lives.</p><h3>Converting from NumPy to the array API</h3><p>Converting a library as vast as scikit-learn–which has over 200 estimators–is a significant undertaking. Indeed, whenever an estimator is converted, we also set up automated testing to ensure that it numerically behaves consistently across backends. This is a multi-year effort involving deep collaboration between Probabl, Quansight, NVIDIA, and the broader scientific Python community.</p><p>So far, approximately 25 of the 200+ estimators are either partially or fully compatible, or in the final stages of integration. Most metric functions (e.g. 
R2, log loss, Brier score) and tools such as cross-validation functions and the scoring API have been updated. Specific tests and continuous integration configuration have also been put in place to regularly monitor the correct execution of those components on a GPU, and more test infrastructure work is in progress.</p><p>To be a bit more precise, let me explain some of the technical changes involved in converting from NumPy to the array API.</p><p>Before, the code would explicitly import NumPy (as “np”) and perform linear algebra operations on NumPy arrays passed as input to the scikit-learn functions. Now, compliant functions accept any array API-compliant input without any explicit hard dependencies on those libraries: the underlying module is retrieved (as “xp”) by inspecting the input arguments. Subsequent linear algebra operations are therefore delegated to the input-specific libraries without having to couple the source code explicitly to any of those array libraries.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Ik7eDKQfGnu9Uez_.png" /></figure><p>In practice, not all array API compatible libraries are 100% compliant with the specification (yet), and importing <a href="https://github.com/data-apis/array-api-compat">array_api_compat</a> is a pragmatic way to handle the transition. For instance, PyTorch implements some features from the spec under different names. So instead of retrieving the array namespace from PyTorch, we ask array_api_compat to get a standard-compliant PyTorch wrapper. 
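</p><p>To make the “xp” pattern concrete, here is a small, self-contained sketch. The get_namespace helper below is a simplified stand-in for scikit-learn’s internal namespace-retrieval logic (the real implementation builds on array_api_compat and handles many more cases); it only illustrates how code can compute through “xp” instead of a hard-coded “np”:</p>

```python
import importlib

import numpy as np


def get_namespace(*arrays):
    # Simplified stand-in (NOT scikit-learn's actual helper): look up the
    # array module from the inputs instead of hard-coding NumPy.
    for array in arrays:
        root_module = type(array).__module__.split(".")[0]
        if root_module != "numpy":
            return importlib.import_module(root_module)
    return np


def standardize(X):
    # Backend-agnostic code: every operation goes through `xp`, which is
    # NumPy here, but would be e.g. `torch` if X were a PyTorch tensor.
    xp = get_namespace(X)
    return (X - xp.mean(X, axis=0)) / xp.std(X, axis=0)


X = np.array([[1.0, 2.0], [3.0, 4.0]])
print(standardize(X))  # each column standardized to -1. and 1.
```

<p>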
If the input array stems from a compliant library, array_api_compat simply returns that module as is.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*f6k8ylpPqv_lG-oc.png" /></figure><p>On top of this, <a href="https://github.com/data-apis/array-api-extra">array-api-extra</a> brings extra benefits that go beyond the spec and enables support for other libraries with special design constraints, such as JAX.</p><p><strong>The value-add for the data scientist: </strong>The significance of this work for the millions of data scientists around the world who use scikit-learn lies in the seamless scalability that has been unlocked. In the past, moving a scikit-learn pipeline to a GPU required a complete rewrite using different libraries. With the array API, no rewrite is needed: you can now tell scikit-learn to delegate compute-intensive work to GPU-aware, array API-compliant libraries.</p><h3>Demo: 15x speed-up for complex ML pipelines with GPUs</h3><p>To illustrate the impact of this work, I measured the time it takes to fit and evaluate the following multistep polynomial regression pipeline:</p><pre>poly_reg_torch_gpu = make_pipeline(
    SplineTransformer(n_knots=5),
    FunctionTransformer(partial(torch.asarray, device=&quot;cuda&quot;)),
    Nystroem(kernel=&quot;poly&quot;, degree=2, n_components=300, random_state=0),
    Ridge(solver=&quot;svd&quot;, alpha=1e-3),
)

cv_results_torch_gpu = cross_validate(
    poly_reg_torch_gpu, X, y_torch_gpu, cv=5
)</pre><p>In the above code, the SplineTransformer has not yet been updated to accept array API inputs, while the other steps have. 
To upgrade this pipeline, we therefore insert a FunctionTransformer step to call torch.asarray(out, device=”cuda”) on the output of the first step before passing the resulting PyTorch GPU array to the Nystroem step, dramatically accelerating the last two steps by letting them operate on the CUDA device.</p><p>By offloading these steps to a GPU using the array API, I observed a <strong>15x speed-up</strong> compared to traditional CPU execution.</p><p><a href="https://colab.research.google.com/drive/1YrCt5iBPT6gnmp7geahRn_9OqCPrfoLb?usp=sharing">Google Colab</a></p><p><strong>Takeaway: </strong>Thanks to GPU acceleration, we can now tune the hyperparameters in a complex pipeline to get a very good model in the time it would take to run a single cross-validation on the Google Colab CPU. More importantly, the training speed is fast enough to avoid disrupting the model development flow of the data scientist interactively editing the Google Colab notebook.</p><h3>Deep dive with NVIDIA</h3><p>I recently had the pleasure of joining NVIDIA experts Andy Terrel, Sergey Maydanov, Ashwin Srinath, and Leo Fang for a technical deep dive into the CUDA Python roadmap and the adoption of the Python array API in scikit-learn.</p><p>We discussed topics like strategies for making GPU-accelerated computing more seamless and accessible for Python developers and data scientists.</p><p>We had over 700 people tune in from all over the world. 
If you missed the live event, I encourage you to watch the replay to see the live demo and the array API in action.</p><figure><img alt="Probabl presents scikit-learn acceleration at NVIDIA Webinar “Python on the GPU: From Libraries to Kernels”" src="https://cdn-images-1.medium.com/max/1024/1*sJKms991unPDiXlOVzkH8w.png" /><figcaption><a href="https://www.nvidia.com/en-eu/events/python-on-the-gpu-webinar/"><strong>Watch the Webinar Replay</strong></a></figcaption></figure><p>By bridging the gap between the easy-to-use and familiar interface of Python libraries and the power of GPUs, we are lowering the barrier to entry for high-performance AI, making it a practical reality for enterprises of all sizes and skill levels.</p><h3>Probabl on stage at NVIDIA GTC 2026</h3><p>Gaël Varoquaux, our CSO, and Yann Lechelle, our Executive President, will be at NVIDIA GTC 2026 in San José next week. Don’t be a stranger; connect with them there!</p><p>On March 17 (3:00 PM — 3:40 PM PDT / 11:00 PM — 11:40 PM CET), Gaël will be speaking on the <a href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s82113/?_hsmi=2">“Accelerating Open Science: Incorporating CUDA Into the SciPy Ecosystem”</a> panel. Gaël will discuss adopting CUDA in scikit-learn without sacrificing usability, portability, or community values, alongside Leo Fang, Ianna Osborne, Travis Oliphant, and Katrina Riehl.</p><p>On March 18 (5:00 AM — 5:50 AM PDT / 1:00 PM — 1:50 PM CET), Yann will be speaking on the <a href="https://www.nvidia.com/gtc/session-catalog/sessions/gtc26-s81898/?ncid=ref-inc-898673">“Europe’s AI Launchpad: Unlock Startup Growth Through Sovereign AI Infrastructure [S81898]”</a> panel. Yann will discuss the dynamic AI compute landscape as well as public and private compute options for startups, alongside Cedric Auliac, Pierre-Antoine Beaudoin, and Sadaf Alam.</p><h3>Words of gratitude</h3><p>Our adoption of the array API is the result of a massive team effort. 
I want to extend my gratitude to the maintainers and contributors from Quansight, NVIDIA, and the community of scikit-learn core contributors. The work performed by Probabl and Quansight on scikit-learn and SciPy is supported by the NASA ROSES grant 80NSSC25K7215 “Ensuring a fast and secure core for scientific Python.” This support is vital for maintaining the health of the open source ecosystem that the world’s scientific and industrial infrastructure relies upon.</p><h3>For more from Probabl:</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><p><strong>References:</strong></p><p>[1] <a href="https://clickpy.clickhouse.com/dashboard/scikit-learn">https://clickpy.clickhouse.com/dashboard/scikit-learn</a> Please note that PyPI downloads are a proxy for adoption and should be taken with a grain of salt; they are not the only way to download a python library, and they may not accurately convey usage.</p><p>[2] Python array API standard <a href="https://data-apis.org/array-api/latest/">https://data-apis.org/array-api/latest/</a></p><p>[3] Enabling array API support in scikit-learn <a href="https://scikit-learn.org/stable/modules/array_api.html">https://scikit-learn.org/stable/modules/array_api.html</a></p><p>[4] Colab notebook of the demo: <a href="https://colab.research.google.com/drive/1YrCt5iBPT6gnmp7geahRn_9OqCPrfoLb?usp=sharing">https://colab.research.google.com/drive/1YrCt5iBPT6gnmp7geahRn_9OqCPrfoLb?usp=sharing</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=04f73b6d137f" width="1" height="1" alt=""><hr><p><a 
href="https://medium.com/probabl/scikit-learn-acceleration-with-gpus-achieving-15x-speed-ups-in-complex-ml-pipelines-with-the-04f73b6d137f">scikit-learn acceleration with GPUs</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Skore Is Live: Track Your Data Science]]></title>
            <link>https://medium.com/probabl/skore-is-live-track-your-data-science-c80b932d6947?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/c80b932d6947</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Thu, 05 Mar 2026 12:06:09 GMT</pubDate>
            <atom:updated>2026-03-17T11:11:35.346Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ogRXn3KC5ovHOsfj49vuTg.png" /></figure><p><em>By </em><strong><em>François Méro</em></strong><em>, CEO of Probabl</em></p><p>Two weeks ago, I wrote about the <a href="https://blog.probabl.ai/towards-data-science-grounded-in-science">five challenges holding back enterprise data science</a>: technology-first thinking, spiraling costs, vendor lock-in, a lack of industrial maturity, and the fading of scientific thinking. This week, Guillaume Lemaitre laid out <a href="https://blog.probabl.ai/next-gen-of-data-science-tooling">the principles for a new generation of data science tooling</a> “built for data scientists, by data scientists”.</p><p>Today, we are putting those principles into practice. <a href="http://probabl.ai/skore">Skore</a> is now publicly available.</p><p>Skore is the collaboration layer for teams. This first release is the first concrete step toward the future of enterprise data science we are building at Probabl. Not the finished vision. The foundation it starts from.</p><p><strong>Get started now:</strong></p><ul><li>Sign up for Skore: <a href="https://skore.probabl.ai">skore.probabl.ai</a></li><li>Learn more: <a href="https://probabl.ai/skore">probabl.ai/skore</a></li><li>Explore the code: <a href="https://github.com/probabl-ai/skore">github.com/probabl-ai/skore</a></li><li>Read the docs: <a href="https://docs.skore.probabl.ai">docs.skore.probabl.ai</a></li></ul><h3>What Skore Does Today</h3><p>If you work with scikit-learn, Skore will feel immediately familiar. Same API philosophy. Same commitment to clarity. 
Scikit-learn gives you powerful building blocks for machine learning; Skore extends it by giving you the guidance and structure to use them well.</p><p>Here is what you can do right now.</p><p><strong>Evaluate any model (even old ones) in one line of code.</strong> Feed your scikit-learn-compatible estimator and your dataset to EstimatorReport. It automatically generates the metrics, feature importance, and plots that are most relevant to your use case. No boilerplate. No navigating through documentation to figure out which evaluation applies. Skore does that work for you, with efficient caching under the hood so everything runs fast.</p><p><strong>Cross-validate with full visibility.</strong> CrossValidationReport gives you a complete estimator report for each fold of your cross-validation. Not just a score, a structured, inspectable report per fold. You see how your model behaves across your data.</p><p><strong>Benchmark models side by side.</strong> Training several estimators? ComparisonReport lets you compare them in a structured way. No more ad hoc notebooks with copy-pasted metric tables. You get a clear, standardized comparison.</p><p><strong>Catch methodological mistakes before they matter. </strong>Skore brings together the tools you need to spot modeling issues early. Explore associations between variables to understand how your features relate to each other, relationships that could impact your modeling, and put them in perspective with the feature importance as seen by your predictive model. Combine this with utilities designed to help flag potential pitfalls in your data splitting strategy, and you have the building blocks to catch fishy patterns before they compromise your model. These are the kinds of insights experienced data scientists develop over time.</p><p><strong>Organize and persist your work.</strong> The Project system lets you save reports, experiments, and artifacts in a structured way. Everything is stored, locally or remotely. 
Nothing gets lost when you close a notebook.</p><p><strong>Collaborate through Skore,</strong> the collaboration layer for teams. Teams can share, compare, and build upon each other’s experiments. It brings visibility across a team’s work, standardizes workflows without slowing anyone down, and frames results for decision-making, so your next stakeholder meeting starts from structured evidence, not a scramble through notebooks.</p><h3>Why This Matters</h3><p>If you are a data scientist, you know the reality of your day-to-day work. You have excellent tools at your disposal: plotly and seaborn for data exploration, scikit-learn for model training and evaluation. These libraries are powerful. They are also generic by design. They accommodate a wide range of use cases without prescribing how to use them.</p><p>That is a strength but also a challenge. Your experience is the key ingredient that determines whether those building blocks are assembled correctly. You spend time navigating documentation, writing boilerplate code for common evaluations, and maintaining project structure by hand. When you are experienced, it works. When you are under pressure, or when the team has mixed levels of seniority, things slip through. Methodology gets cut short. Context gets lost. Models reach production with flaws that could have been caught earlier, if they are caught at all.</p><p>Skore is designed to close that gap. It acts as a conductor that transforms your way of working into structured, meaningful artifacts. It reduces the time you spend navigating documentation, eliminates boilerplate code, and guides you toward the right methodological choices, the ones you would have made if you had infinite time and attention.</p><p>Think of it this way: scikit-learn trusts you to make the right decisions. Skore helps you actually make them, consistently, across every project.</p><h3>Our First Move, Not Our Last</h3><p>We want to be straightforward. This is early. 
Skore is at the beginning of its journey. We are shipping fast, and there is much more to come.</p><p>What you see today (evaluation reports, cross-validation insights, methodological diagnostics, model comparison, and team collaboration) is the first layer. It is where we deliver immediate, tangible value to any data scientist using scikit-learn.</p><p>But our ambition goes further. In the two posts that preceded this one, we laid out a vision for enterprise data science grounded in science, composability, reusability, and transparency. Skore is the vehicle for that vision. Over the coming months, you can expect:</p><ul><li><strong>Deeper</strong> <strong>guidance:</strong> starting with the scientific guardrails you already see in this release, and evolving toward contextual recommendations that learn from your practice and your organization’s data science work.</li><li><strong>AI-powered augmentation:</strong> feeding the right context from your experiments into code generators and assistants, so that AI-generated code is grounded in your specific project, not generic suggestions.</li><li><strong>Full process coverage: </strong>extending Skore upstream toward data preparation and downstream toward MLOps handoffs, always from the data scientist’s perspective.</li><li><strong>Richer collaboration</strong>: multi-audience reporting, model cards, and documentation that translates technical results into business narratives.</li></ul><p>We are building Skore the same way scikit-learn was built: step by step, guided by real-world usage, with the community as co-pilot. This release is the result of working closely with early users and our Design Partners. Their feedback and yours shape every decision.</p><h3>Who is Skore For</h3><p>Skore is for data scientists who use Python and the scikit-learn ecosystem. Whether you work alone or in a team. 
Whether you are building your first model or managing a portfolio of hundreds.</p><p>If you are experienced, Skore saves you time. It eliminates the repetitive evaluation code you write on every project and gives you a clean, structured record of your work.</p><p>If you are building your skills, Skore accelerates your growth. The methodological warnings and automated diagnostics encode the judgment that takes years to develop. You benefit from that expertise from day one.</p><p>If you lead a data science team, Skore gives you visibility. Through Skore, you can see how experiments progress across the team, standardize best practices without micromanaging, and present results to stakeholders in a format they can act on.</p><p>And if your company has already invested in a data science practice but struggles to scale its impact, Skore is designed precisely for you. It works with your existing stack, not against it. It plugs into your environment. It does not create vendor lock-in.</p><h3>Get Involved</h3><p>We believe the best data science tooling comes from the community that uses it.</p><ul><li>Sign up for Skore: <a href="https://skore.probabl.ai">skore.probabl.ai</a></li><li>Learn more: <a href="https://probabl.ai/skore">probabl.ai/skore</a></li><li>Explore the code: <a href="https://github.com/probabl-ai/skore">github.com/probabl-ai/skore</a></li><li>Read the docs: <a href="https://docs.skore.probabl.ai">docs.skore.probabl.ai</a></li></ul><p>We would love your feedback. File issues, contribute code, or just tell us what you think. This is the beginning. 
And we are building it with you.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c80b932d6947" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/skore-is-live-track-your-data-science-c80b932d6947">Skore Is Live: Track Your Data Science</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[For data scientists, by data scientists: Building the next generation of data science tooling]]></title>
            <link>https://medium.com/probabl/for-data-scientists-by-data-scientists-building-the-next-generation-of-data-science-tooling-2239d0c7e688?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/2239d0c7e688</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 03 Mar 2026 14:59:09 GMT</pubDate>
            <atom:updated>2026-03-20T14:21:07.527Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="By Guillaume Lemaitre, VP of Product Strategy at Probabl &amp; core maintainer of scikit-learn" src="https://cdn-images-1.medium.com/max/1024/1*AAu7VswEoCCf9xjN5a5k3w.png" /></figure><p><em>By </em><strong><em>Guillaume Lemaitre</em></strong><em>, VP of Product Strategy at Probabl &amp; core maintainer of scikit-learn</em></p><p>Data science has the power to transform how enterprises understand their world and make high-impact decisions. When executed well, it delivers profound business value through optimized operations and strategic advantages that compound over time.</p><p>However, the potential for data science remains constrained by a myriad of organizational challenges and friction points that exist throughout data science workflows, from misaligned processes to fragmented tooling. It’s clear that enterprises deserve better tooling for data science — tooling designed for data scientists, by data scientists.</p><p>In this post, I break down the evidence about these friction points and present our vision at Probabl for building the next generation of data science tooling, anchored in our deep understanding of the <em>scientific</em> practice of data science.</p><h3>The status quo of enterprise data science</h3><h3>What the data says: Challenges in enterprise data science</h3><p>Industry insights provide a clear signal that we have to change the status quo of enterprise data science and AI.</p><p>According to RAND, more than 80% of AI projects fail, which is twice the rate of traditional IT projects [1]. Further research finds that 87% of projects never even reach production [2], and Gartner predicts that through 2026 60% of AI initiatives will be abandoned because the data held by enterprises is simply not yet AI-ready [3].</p><p>It’s also clear that these failures are rarely the result of poor data science or AI in itself; they tend to be organizational. 
For example, RAND finds that challenges often come from the top when business leaders do not clearly articulate specific problems that need to be solved and lean into technology-first thinking, prioritizing trendy AI solutions when simpler analytical approaches might actually suffice [1].</p><p>Many AI projects also fail because enterprises lack the necessary data to train models, and even when data exists, the unglamorous work of data wrangling consumes a disproportionate amount of time [1, 6]. This is compounded by a process mismatch, where frameworks designed for software engineering are applied to data science, despite it being an R&amp;D process where the end product is often unclear at the outset [4–5].</p><h3>Pinpointing the frictions that we must solve for</h3><p>If we want to build solutions that solve these challenges, we must understand exactly where frictions live in enterprise data science workflows.</p><p>To do so, let’s think of the data science workflow as it’s visualized in Figure 1. We can distinguish the types of work by their nature rather than by job titles. Upstream work centers on making raw data available and usable, while downstream work focuses on putting models into production. Between these lies the data science work–the scientific core of the process–which demands methodological rigor and careful experimental design.</p><figure><img alt="Steps and frictions in data science workflows: data preparation, data science, and machine learning model deployment." src="https://cdn-images-1.medium.com/max/842/0*0jswYAiPlsRJQb1b.png" /><figcaption><em>Figure 1: A typical data science workflow (arrows represent potential failure points)</em></figcaption></figure><p>While the data science ecosystem provides many powerful tools for these different types of work, the connective tissue to make them work as a coherent whole is still missing. 
What’s more, the current tooling landscape wasn’t designed with the data scientist’s work as its focus, but rather with production, which is closer to engineering. As a result, frictions and potential failure points exist throughout the data science workflow, which we must solve for.</p><p>These frictions include the following:</p><p><strong>Friction at the boundaries:</strong> Currently, the transitions between upstream work (data collection, data engineering), data science work, and downstream work (MLOps) are not always optimal. In the worst-case scenarios, handoffs between stages result in lost context, rewritten code, and undocumented methodological decisions. Moving from prepared data into experimentation, or from a validated model to production, requires knowledge that current tools neither capture nor transfer.</p><p><strong>Friction between stakeholders:</strong> Domain experts struggle to articulate data requirements clearly. Those leading data science teams find it difficult to translate model performance into business impact. Business leaders set objectives without understanding what AI can realistically achieve. Technical work proceeds without clarity on what success looks like.</p><p><strong>Friction within the data science work itself:</strong> Experiments need review, results require interpretation, and methodological choices must be justified. Yet without structured processes for this validation cycle, quality control risks becoming ad hoc and inconsistent.</p><p><strong>Friction in data science tooling:</strong> Most critically, existing tools tend to misunderstand what data science is. Data science is not software engineering. It transforms data–the essential ingredient–into impactful results through three dimensions: coding, business understanding, and scientific methodology. This scientific dimension changes everything. 
A methodological mistake can invalidate an entire project, leading teams to perfectly solve the wrong business problem, regardless of code quality or data accuracy. Existing tools do not address the core challenge of data science work–ensuring scientific excellence, maintaining methodological context across iterations, and translating statistical findings into business narratives.</p><h3>Principles for a new generation of data science tooling</h3><p>These frictions create an opportunity for a new generation of data science tooling.</p><p>At Probabl, we’re on a mission to remove these frictions and unleash the full potential of data science teams. We imagine a world where data science moves at the speed of insights, where experiments build naturally on previous work, and where the path from a question to an answer is measured in days, not months.</p><p>This requires a fundamental shift in building tools for data scientists, by data scientists. Towards this end, you can expect us to double down on the following principles.</p><h3>Data science as the core</h3><p>We build specifically for the data scientist.</p><p>This choice is rooted in our identity as stewards of scikit-learn. We have shaped how millions of practitioners practice machine learning. We understand that the data scientist holds a unique position as a hybrid professional who bridges quantitative excellence with business context. While AI can automate parts of code generation, it cannot replicate the contextual understanding and data-driven reasoning required for high-stakes decision making in enterprises. We believe the work carried out by data scientists is where better tooling is most needed to achieve faster outcomes and impact.</p><h3>Ecosystem-first architecture</h3><p>We build ecosystem-first architecture, not one-size-fits-all platforms.</p><p>We understand that data scientists have strong preferences for their tools. 
They are more likely to assemble a curated set of libraries within their existing environment than to adopt a rigid, all-in-one platform. Similarly, we understand that enterprises operate on diverse infrastructures. Attempting to graft a one-size-fits-all data science platform onto these heterogeneous infrastructures creates integration challenges, vendor lock-in, and operational friction. We offer a modular, slot-in approach that respects autonomy and works with an existing technology stack rather than replacing it. This ecosystem delivers value where data scientists actually work while integrating seamlessly with diverse enterprise infrastructures.</p><h3>Designed to ground the validity of the data science practice</h3><p>Data science is fundamentally a process of empirical discovery.</p><p>What distinguishes data scientists from software engineers is not just technical skill but the empirical nature of their work: data science exists to uncover novel insights from data–to reveal patterns, relationships, and knowledge that weren’t previously known or understood. Because data science is discovery, it demands rigorous methodology. When you’re uncovering insights that will drive business decisions, methodological mistakes can lead to false discoveries, misguided strategies, and wasted resources. Our solutions ensure that empirical discovery is supported throughout the entire process, from data validation to stakeholder communication.</p><h3>AI-powered augmentation</h3><p>If data science is the discipline of unlocking value from data, then we should practice what we preach: we will augment data scientists by leveraging AI.</p><p>The goal is to reduce time-to-value by deploying AI where it genuinely accelerates the workflow, such as through natural language interfaces and automated diagnostics. We build intelligent systems that learn from the essence of data science practice to provide contextually relevant guidance. 
As advances like the state-of-the-art tabular foundation models TabICL [7] and TabPFN [8] emerge, we integrate these innovations while maintaining the principle that AI should always empower the data scientist rather than replace them.</p><h3>Full process coverage</h3><p>To truly serve data scientists, we must address friction across the entire workflow.</p><p>Upstream, we bring databases closer to predictive modeling to reduce reliance on constant intervention from data engineers. Downstream, we view MLOps through the practitioner’s lens to ensure seamless handoffs while preserving scientific context. Finally, recognizing that communication is where value is ultimately realized or lost, we provide tools that help data scientists explain model behavior and translate technical results into business narratives, ensuring that scientific excellence leads to real-world impact.</p><h3>Putting these principles into practice</h3><p>At Probabl, we’re committed to putting our principles into practice. Just as we standardized the practice of machine learning by creating scikit-learn, we intend to do the same for the entire enterprise data science practice.</p><h3>For more from Probabl:</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><p><strong>References:</strong></p><p>[1] Ryseff, J., Bruzelius, E., &amp; Scobell, W. (2024). <em>The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI</em>. RAND Corporation. 
<a href="https://www.rand.org/pubs/research_reports/RRA2680-1.html">https://www.rand.org/pubs/research_reports/RRA2680-1.html</a></p><p>[2] Lorica, B. (2019, July 19). Why do 87% of data science projects never make it into production? <em>VentureBeat</em>. <a href="https://venturebeat.com/ai/why-do-87-of-data-science-projects-never-make-it-into-production/">https://venturebeat.com/ai/why-do-87-of-data-science-projects-never-make-it-into-production/</a></p><p>[3] Gartner. (2025, February 26). Lack of AI-Ready Data Puts AI Projects at Risk [Press release]. <a href="https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk">https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk</a></p><p>[4] Freischlag, C. (2022). Machine learning projects and Scrum: A major mismatch. <em>Towards Data Science</em>. <a href="https://medium.com/data-science/machine-learning-projects-and-scrum-a-major-mismatch-c155ad8e2eee">https://medium.com/data-science/machine-learning-projects-and-scrum-a-major-mismatch-c155ad8e2eee</a></p><p>[5] Ribeiro, D. (2025). Why Agile Doesn’t Work for Data Science and How DSLP Fills the Gap. <em>LinkedIn</em>. <a href="https://www.linkedin.com/pulse/why-agile-doesnt-work-data-science-how-dslp-fills-gap-diogo-ribeiro-gnirf/">https://www.linkedin.com/pulse/why-agile-doesnt-work-data-science-how-dslp-fills-gap-diogo-ribeiro-gnirf/</a></p><p>[6] Oleli, D. (2018, July 13). Bridging The Data Scientist Talent Gap Starts With Defining The Current Role. <em>Forbes</em>. <a href="https://www.forbes.com/sites/forbestechcouncil/2018/07/13/bridging-the-data-scientist-talent-gap-starts-with-defining-the-current-role/">https://www.forbes.com/sites/forbestechcouncil/2018/07/13/bridging-the-data-scientist-talent-gap-starts-with-defining-the-current-role/</a></p><p>[7] Qu, J., Holzmüller, D., Varoquaux, G., &amp; Le Morvan, M. (2026). 
TabICLv2: A better, faster, scalable, and open tabular foundation model. <a href="https://arxiv.org/abs/2602.11139">https://arxiv.org/abs/2602.11139</a>. Code: <a href="https://github.com/soda-inria/tabicl">https://github.com/soda-inria/tabicl</a>. Installation: <a href="https://pypi.org/project/tabicl/">https://pypi.org/project/tabicl/</a>.</p><p>[8] Hollmann, N., Müller, S., Eggensperger, K., &amp; Hutter, F. (2023). TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second. <a href="https://arxiv.org/abs/2207.01848">https://arxiv.org/abs/2207.01848</a>. Code: <a href="https://github.com/PriorLabs/TabPFN">https://github.com/PriorLabs/TabPFN</a>. Installation: <a href="https://pypi.org/project/tabpfn/">https://pypi.org/project/tabpfn/</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2239d0c7e688" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/for-data-scientists-by-data-scientists-building-the-next-generation-of-data-science-tooling-2239d0c7e688">For data scientists, by data scientists: Building the next generation of data science tooling</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Maintaining open source in the age of generative AI: Recommendations for maintainers and…]]></title>
            <link>https://medium.com/probabl/maintaining-open-source-in-the-age-of-generative-ai-recommendations-for-maintainers-and-7e356a58749a?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/7e356a58749a</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[open-source]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 24 Feb 2026 16:45:19 GMT</pubDate>
            <atom:updated>2026-03-20T14:43:34.369Z</atom:updated>
            <content:encoded><![CDATA[<h3>Maintaining open source in the age of generative AI: Recommendations for maintainers and contributors</h3><figure><img alt="By Adrin Jalali, VP of Labs at Probabl &amp; scikit-learn core maintainer, and Cailean Osborne, Head of Ecosystem Development at Probabl" src="https://cdn-images-1.medium.com/max/1024/1*HU_jxj5-SAtLn8NK7Zu4lw.png" /></figure><p><em>By </em><strong><em>Adrin Jalali</em></strong><em>, VP of Labs at Probabl &amp; scikit-learn core maintainer, and </em><strong><em>Cailean Osborne</em></strong><em>, Head of Ecosystem Development at Probabl</em></p><p>Almost 7 years ago, Ralf Gommers from Quansight wrote an excellent blog post about <a href="https://rgommers.github.io/2019/06/the-cost-of-an-open-source-contribution/">the cost of an open source contribution</a>, where he described what happens when a contribution comes into an open source project, and the subsequent challenges and bottlenecks for maintainers. Since then, open source has become even more mainstream. The number of contributors has gone up, while the number of maintainers in many projects has stayed more or less the same.</p><p>On top of that, with the expansion of AI tools and their general availability, an increasing number of people seem to be trying their luck at “vibe contributing” to open source. 
Recently, it has become rather straightforward to prompt an IDE or an agent to read guidelines, claim an issue, and submit a solution without the contributor ever truly understanding their code.</p><p>Maintainers, like our colleagues who maintain core data science libraries such as scikit-learn and skore, are increasingly buried under LLM-generated comments, auto-generated issues, and low-quality PRs that do not advance the project, such as redundant edits to README files and PRs that claim significant performance gains while failing basic linting tests.</p><p>These come almost exclusively from “first-time contributors” who seem to want to have a contribution in our project without understanding their submissions. The issue has gotten so bad that at times almost every second issue on our main repo receives at least one such message, often several. Some of our peers now dread opening issues because they don’t want to spend their time interacting with AI.</p><p>While the cost of writing and contributing code has shrunk thanks to AI, the cost of reviewing and maintaining code hasn’t. Given these developments, maintainers from many open source communities have been deliberating what to do. Some reject all AI-generated contributions, saying they have never seen useful ones or that the risks of copyright infringement or license violations are simply too great. Meanwhile, others are proactively trying to steer the types of AI-generated contributions they receive by implementing solutions like AI contribution policies and AGENTS.md files.</p><p>So, what should open source maintainers do? There is clearly no single way to deal with AI-generated contributions, and by no means do we want to prescribe the <em>best</em> ways to go about it. 
Instead, we’ve summarized discussions and solutions we’ve seen lately in scikit-learn as well as other open source communities, with the hope that gathering this information in one place may be useful for others asking themselves the same question.</p><h3>The surge of AI-generated contributions</h3><p>In <a href="https://adrin.info/the-cost-of-ai-in-open-source-maintenance.html">“The Cost of AI in Open Source Maintenance,”</a> I (Adrin) wrote about the types of AI-generated contributions to scikit-learn, the impacts on the maintainers, as well as recommendations to future contributors. As described there, in scikit-learn, we have encountered the following types of interactions with users who post LLM-generated content.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/779/0*6y3pLqF_GvcG2wOG.png" /></figure><p><em>Table 1: Types of AI-generated contributions to scikit-learn</em></p><p>These kinds of AI-generated contributions are creating a lot of extra work for us at scikit-learn and taking up valuable time that could be better used for other tasks like onboarding genuinely motivated contributors or advancing the priorities in our roadmap.</p><p>Our struggle isn’t happening in a vacuum though. This has become an open source-wide challenge. 
A series of recent discussions show this trend affects open source communities across the board, and maintainers are debating different ways to adapt to this new reality.</p><p>Just to name a few, take a look at the list of discussions in the last month alone that our colleague and scikit-learn core developer, Stefanie Senger, shared in <a href="https://github.com/scikit-learn/scikit-learn/issues/31679">issue #31679</a> in scikit-learn:</p><ul><li>January 27, 2026: Github Community Discussions: <a href="https://github.com/orgs/community/discussions/185387">Exploring Solutions to Tackle Low-Quality Contributions on GitHub.</a></li><li>January 29, 2026: Scientific Python blog post on <a href="https://blog.scientific-python.org/scientific-python/community-considerations-around-ai/">Community Considerations Around AI Contributions.</a></li><li>February 3, 2026: <a href="https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai">GitHub ponders kill switch for pull requests to stop AI slop.</a></li><li>February 6, 2026: Numpy Mailing List: <a href="https://mail.python.org/archives/list/numpy-discussion@python.org/thread/LAR7P3KQWHWAIKYSHTS2MY7X4HUBVA3L/">Current policy on AI-generated code in NumPy.</a></li><li>February 12, 2026: Scott Shambaugh’s blog post: <a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/">An AI Agent Published a Hit Piece on Me</a> related to closing an AI-generated PR:<a href="https://github.com/matplotlib/matplotlib/pull/31132"> matplotlib/matplotlib#31132.</a></li><li>February 14, 2026: <a href="https://medium.com/reading-sh/github-finally-acknowledged-open-sources-spam-crisis-a32b22a6699c">GitHub acknowledged open source’s spam crisis</a> with a nice timeline on the most recent developments.</li></ul><h3>Recommendations for maintainers</h3><p>If you’re a maintainer and you’re wondering what to do about AI-generated contributions to your project, perhaps some of the following approaches might be 
helpful.</p><h3>Establish an AI use policy</h3><p>Vague guidelines about AI use are not sufficient. In scikit-learn, we have codified our stance in our <a href="https://scikit-learn.org/dev/developers/contributing.html#automated-contributions-policy">Automated Contributions Policy</a>. Basically, contributions require human judgment. It states that maintainers reserve the right to close fully automated submissions and if judged appropriate, ban the user from the GitHub organisation. Contributors are also required to <a href="https://github.com/scikit-learn/scikit-learn/blob/main/.github/PULL_REQUEST_TEMPLATE.md">disclose AI usage</a>. This works well because it empowers the maintainers to act decisively.</p><p>Below we’ve summarized acceptable and unacceptable uses that are mentioned in AI tool use policies and/or guidelines by open source projects and communities. Melissa Weber Mendonça of Quansight has also created this useful <a href="https://github.com/melissawm/open-source-ai-contribution-policies">repo</a> with policies of many other projects, which we recommend checking out.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/806/1*_cvQXbu89YJFNPPphgA3MA.png" /></figure><p><em>Table 2: Examples of AI tool use policies and/or guidelines. 
URLs: </em><a href="https://scikit-learn.org/dev/developers/contributing.html#automated-contributions-policy"><em>scikit-learn</em></a><em>, </em><a href="https://blog.scientific-python.org/scientific-python/community-considerations-around-ai/"><em>Scientific Python</em></a><em>, </em><a href="https://devguide.python.org/getting-started/generative-ai/"><em>Python Developer’s Guide (CPython Team)</em></a><em>, </em><a href="https://www.apache.org/legal/generative-tooling.html"><em>Apache Software Foundation</em></a><em>, </em><a href="https://www.linuxfoundation.org/legal/generative-ai"><em>Linux Foundation</em></a><em>, </em><a href="https://firefox-source-docs.mozilla.org/contributing/ai-coding.html"><em>Mozilla Firefox</em></a><em>, </em><a href="https://github.com/zulip/zulip/blob/main/CONTRIBUTING.md#ai-use-policy-and-guidelines"><em>Zulip</em></a><em>, and </em><a href="https://castle-engine.io/ai"><em>Castle Engine</em></a><em>.</em></p><h3>Principles you may want to include in your AI use policy</h3><p>AI contribution policies come in many flavors. If you’re considering developing one for your project, you may want to take a look at the principles advocated by the Scientific Python community. 
In January 2026, Stefan van der Walt proposed <a href="https://blog.scientific-python.org/scientific-python/community-considerations-around-ai/">four principles</a>:</p><ol><li>Be transparent</li><li>Take responsibility</li><li>Gain understanding</li><li>Honor Copyright</li></ol><p>In the <a href="https://mail.python.org/archives/list/numpy-discussion@python.org/thread/LAR7P3KQWHWAIKYSHTS2MY7X4HUBVA3L/">NumPy mailing list</a>, Ralf Gommers recently added a fifth principle: “we want to interact with other humans, not machines,” which LLVM recently adopted in their <a href="https://llvm.org/docs/AIToolPolicy.html">AI Tool Policy</a>.</p><h3>Create agent guidance like an AGENTS.md file</h3><p>More and more projects are adding an <a href="http://agents.md">AGENTS.md file</a> to their repo, which gives explicit instructions for how agents should interact in the repo. This is a strategy for driving up the quality of automated interactions by giving agents better rules to follow.</p><p>For example, the Apache Airflow maintainers added this <a href="https://github.com/apache/airflow/blob/main/AGENTS.md">AGENTS.md</a> file to their repo. Jarek Potiuk from Airflow has commented that historically they had very good human-targeted documentation for contributors, and they suspect that it helped drive up the quality of AI-generated contributions. Jarek has recommended developing good instructions and embracing the AI contributions that follow.</p><h3>Prepare tooling and instructions for AI use in contributions</h3><p>Jarek Potiuk from Apache Airflow has also recommended developing bespoke tooling and instructions for AI use, and in turn deliberately inviting contributors to use AI in their contributions accordingly. In the <a href="https://github.com/apache/airflow/blob/main/contributing-docs/05_pull_requests.rst#gen-ai-assisted-contributions">Apache Airflow contributing docs</a>, they list guidelines for contributors who use AI tools to help them create PRs. 
For example, <a href="https://news.apache.org/foundation/entry/ai-and-open-source-expanding-apache-airflows-global-impact-through-collaboration">here</a> they mention that for a large-scale UI documentation translation task, they “developed custom tooling that helped to more easily apply regular tools like coding LLM integration to aid translation efforts.” Their guidelines also mention how the maintainers may proceed if a contributor ignores the guidelines, repeatedly ignores them, or spams the project.</p><h3>Onboard new contributors via structured programs</h3><p>The “good first issue” label has become a magnet for automated agents. Thibaud Colas from Wagtail <a href="https://wagtail.org/blog/open-source-maintenance-new-contributors-and-ai-agents/">shared</a> that they are now using matchmaking programs like <a href="https://djangonaut.space/">Djangonaut Space</a> or <a href="https://www.outreachy.org/">Outreachy</a> to onboard new contributors. These programs are working for them because they ensure a sustained commitment and human-to-human connection that AI tools cannot replicate.</p><h3>Be open to how contributors may be using AI tools to learn and improve</h3><p>In the <a href="https://mail.python.org/archives/list/numpy-discussion@python.org/thread/LAR7P3KQWHWAIKYSHTS2MY7X4HUBVA3L/">NumPy mailing list</a>, Ralf Gommers from Quansight recently challenged the belief that AI tools are just making contributors dumber, suggesting they can actually facilitate learning. For example, they can be used to automate routine tasks once mastered, brainstorm design options, and write documentation that maintainers may be too busy to produce, among other uses.</p><h3>Recommendations for contributors</h3><p>Now, let’s turn to open source contributors who use AI tools.</p><p>In this day and age, it would be unreasonable to expect folks not to use any AI tools. Many of us use these tools one way or another. 
However, you should never submit contributions without understanding what you’re submitting.</p><p>Basically, if you are using AI tools to make open source contributions, the goal should be to reach a state of understanding where you no longer need the tool to explain your own work. You should also be spending <em>at least</em> as much time creating your contribution as it takes a maintainer to review it. <a href="https://llvm.org/docs/AIToolPolicy.html">LLVM’s</a> golden rule is that a contribution should be worth more to the project than the time it takes to review it.</p><p>As mentioned in the September blog post above, below are some recommended ways that you can engage with your AI tools when contributing to open source:</p><ul><li><strong>Explain the codebase:</strong> Use tools to help you find where a function is defined or to explain a complex regex. This speeds up your learning curve without adding noise to the issue tracker.</li><li><strong>Help with boilerplate:</strong> AI is excellent at generating repetitive test structures. Use it for drudgery, but write the logic yourself.</li><li><strong>Drafting for non-native speakers:</strong> We welcome AI help with English grammar and clarity. This makes open source more accessible.</li><li><strong>Brainstorming:</strong> Use an LLM to suggest multiple design options. 
This broadens discovery, but the final decision must be yours.</li></ul><p>Below are additional recommendations from others:</p><ul><li><strong>Principles for high-quality contributions:</strong> The <a href="https://devguide.python.org/getting-started/generative-ai/">Generative AI</a> policy in the Python Developer’s Guide recommends that contributors bear in mind four principles when making a contribution: consider whether the change is necessary; make minimal, focused changes; follow existing coding style and patterns; and write tests that exercise the change.</li><li><strong>Transparency and responsibility: </strong>If you use an AI to help you make your contribution, be honest. Projects like Wagtail now <a href="https://wagtail.org/blog/open-source-maintenance-new-contributors-and-ai-agents/">request explicit disclosure</a> of AI use. You must take full responsibility for every character you submit. If a maintainer asks why a certain choice was made, the answer should never be: “I am not sure, the AI did it.”</li><li><strong>Let the problems find you: </strong>Avoid drive-by portfolio building. Instead of browsing for random issues, follow the advice of <a href="https://github.com/Quansight/Quansight-website/pull/945/changes">Marco Gorelli</a>: contribute to the tools you actually use. When you encounter a bug in your own workflow, turning your frustration into a contribution is the most rewarding way to start. We know it’s hard to get started with a project. If you’re genuinely interested, drop a line to a maintainer and ask how you can help.</li></ul><p>Our closing message to contributors is: We want to work with you, mentor you, and see you make genuine attempts to solve problems. If you use AI tools to empower your curiosity–if you are reviewing, testing, and understanding every line you submit–then you are the kind of contributor we are excited to welcome. 
If you want to get involved in scikit-learn, check out our guidance for first-time contributors <a href="https://scikit-learn.org/dev/developers/contributing.html">here</a>.</p><h3>Closing thoughts</h3><p>There are many ways maintainers may decide to tackle the surge of AI-generated contributions to open source projects, and ultimately it’s up to you and your community to build consensus on <em>your</em> way forward.</p><p>Our bottom line is that since AI tool usage is so prominent now, we should be proactive and intentional about shaping good policies and practices. This is in the long-term interest of our projects and open source in general. It means sharing our learnings, engaging in debate, and building resources together.</p><h3>For more from Probabl:</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7e356a58749a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/probabl/maintaining-open-source-in-the-age-of-generative-ai-recommendations-for-maintainers-and-7e356a58749a">Maintaining open source in the age of generative AI: Recommendations for maintainers and…</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Beyond the hype: Charting a new direction for enterprise data science grounded in science]]></title>
            <link>https://medium.com/probabl/beyond-the-hype-charting-a-new-direction-for-enterprise-data-science-grounded-in-science-f64517ff0051?source=rss-3364d04aa73a------2</link>
            <guid isPermaLink="false">https://medium.com/p/f64517ff0051</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[:probabl.]]></dc:creator>
            <pubDate>Tue, 17 Feb 2026 15:06:00 GMT</pubDate>
            <atom:updated>2026-03-20T14:04:39.851Z</atom:updated>
<content:encoded><![CDATA[<h3>Beyond the hype: Charting a new direction for enterprise data science</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yujZlAJoBKa0uHdlbDX6UA.png" /></figure><p><em>By </em><strong><em>François Méro</em></strong><em>, CEO of Probabl</em></p><p>The data science industry is at an inflection point. We are surrounded by high-velocity innovation and new AI technologies that have the potential to create entirely new paradigms for how we work, communicate, and even make decisions.</p><p>This momentum is certainly a testament to the incredible talent and ambition in our industry. But as we push forward, we must acknowledge that the long-term success of enterprise data science is held back by critical challenges–be it the technology-first mindset that tempts business leaders to replace proven processes with AI tools that promise magic but ultimately deliver opacity, spiraling pay-as-you-go costs that hamper economies of scale, or all-in strategies that lead to vendor lock-in.</p><p>In light of these industry trends, I would like to suggest a different way forward for enterprise data science; one that turns data science into the industrial-grade practice it deserves to be and one that empowers enterprises to own their data science and ultimately achieve a return on the money invested.</p><h3>5 challenges holding back industrial-grade data science</h3><p>Let’s be brutally honest with ourselves: the practice of industrial-grade data science has not yet achieved its full potential. If we, the data science industry, want to realize the long-term success of enterprise data science, we must ambitiously tackle the challenges that we face. 
Consider the following five, which my team at Probabl and I believe are critical.</p><h4>The rising tide of technology-first thinking</h4><p>New technologies have the potential to create new paradigms, and the momentum behind AI tooling tells enterprises that legacy applications and processes must be replaced because AI will surpass human creativity and productivity. When innovation in data science doesn’t allow you to understand and reuse your existing experiments and models, it creates technical debt and amplifies costs.</p><h4>The pay-as-you-go trap</h4><p>On-demand pricing has become the norm. Pay for compute. Pay for GPUs. Pay for tokens. Costs spiral out of control. Budget forecasting becomes impossible. Scaling your business no longer creates economies of scale; it creates uncontrollable OPEX expansion. When you give your suppliers open-ended access to your bank account, your expansion generates their profits, not yours.</p><h4>All-in strategies create lock-in</h4><p>Cloud-only, GPU-only, or AI-only strategies sound modern and decisive. Yet they create strategic dependencies that contradict long-term value creation. When you lose autonomy, you lose freedom of movement, and your infrastructure decisions become vendor lock-in.</p><h4>Data science has not reached the industrial maturity it deserves</h4><p>Machine learning models rarely make it to production. Experiments are lost when team members leave, and reproducibility remains an aspiration rather than a standard. The discipline has grown in adoption but not always in rigor. Practitioners still reinvent wheels, lack shared quality standards, and operate without the engineering discipline that data science deserves. When most data science work never delivers business value because it can’t scale beyond notebooks and proofs of concept, the culprit is the lack of industrial-grade practice.</p><h4>Scientific thinking has been forgotten</h4><p>Data science is not software engineering. It requires a different discipline. 
Adopting new data science technologies should not undermine peer review, explainability, and ultimately trust. It should not create opacity and remove your control over business-critical systems. Because methodology matters, because statistical rigor matters, because explainability and understanding of your models matter, you should not rush to replace scientific discipline with automated tools that promise magic but deliver opacity.</p><h3>Another way forward: Bringing the science of data to the world</h3><p>To tackle these challenges, we must make a pragmatic shift in the practice of data science. At Probabl, we advocate firmly for an approach that is built on the following principles.</p><h4>Transparency and explainability lead to ownership, trust, and impact</h4><p>When you understand and can see how your models work, you can improve and trust them. Trust enables confidence in your decisions and accountability in your results. Understanding drives business value and competitive advantage.</p><h4>Composability leads to agility and independence</h4><p>We believe in agility and independence. By choosing tools that are modular and plug into your existing stack, you retain the freedom to adapt to change and choose the best tool for each specific use case. This ensures you control your destiny and pay the right price rather than being forced into a walled garden or vendor lock-in.</p><h4>Reusability leads to economies of scale</h4><p>Innovation should not mean that your existing investments become obsolete. When past experiments and models are treated as building blocks, you can build on experience and create true long-term value.</p><h4>Science first</h4><p>Data science was born from the scientific method–hypothesis, experimentation, measurement, and peer review. These foundations are precisely why data science creates value for enterprises. When science comes first, you start with the problem, not the tool. You validate before you deploy. 
You question before you trust. Methodology should drive tooling, not the other way around.</p><p>By returning to these principles, we can move away from automated tools that promise magic but deliver opacity, and return to the rigor and strategic autonomy that business-critical systems require.</p><h3>Putting these principles into practice</h3><p>Something is brewing at Probabl.</p><p>Just as our founders worked to unify the practice of machine learning with scikit-learn, we have spent the last year building something new for enterprise data science teams: for those who refuse to choose between scientific rigor and industrial scale; for those who want to move fast without losing control; and, indeed, for those who want to own their data science.</p><p>Our team standardized machine learning once. Now, we’re aiming to do the same for the entire enterprise data science practice.</p><p>Don’t miss the reveal. 
To be the first to know what we’re launching in March, join our <a href="https://probabl.ai/skore">Public Launch Waitlist</a> and follow us on <a href="https://www.linkedin.com/company/probabl/">LinkedIn</a>.</p><h3>For more from Probabl:</h3><ul><li>Follow our latest updates on <a href="https://www.linkedin.com/company/probabl">LinkedIn</a></li><li>Subscribe to our <a href="https://hello.probabl.ai/subscribe-probabl-newsletter">monthly newsletter</a></li><li>Check out our technical explainer videos on <a href="https://www.youtube.com/@probabl_ai">YouTube</a></li></ul><hr><p><a href="https://medium.com/probabl/beyond-the-hype-charting-a-new-direction-for-enterprise-data-science-grounded-in-science-f64517ff0051">Beyond the hype: Charting a new direction for enterprise data science grounded in science</a> was originally published in <a href="https://medium.com/probabl">:probabl.</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>