Brought to you by Michael and Brian - take a Talk Python course or get Brian's pytest book

#477: Lazy, Frozen, and 31% Lighter

Published Mon, Apr 20, 2026, recorded Mon, Apr 20, 2026
Watch this episode on YouTube
Watch the live stream replay

About the show

Sponsored by us! Support our work through:

Michael #1: Django Modern Rest

  • Modern REST framework for Django with types and async support
  • Supports Pydantic, Attrs, and msgspec
  • Has AI coding support with llms.txt
  • See an example at the “showcase” section

Brian #2: Already playing with Python 3.15

  • 3.15.0a8, 3.14.4, and 3.13.13 are out
    • Hugo van Kemenade
  • beta comes in May, RCs in September, and the final is planned for October
  • But still, there’s awesome stuff here already, here’s what I’m looking forward to:
    • PEP 810: Explicit lazy imports
    • PEP 814: frozendict built-in type
    • PEP 798: Unpacking in comprehensions with * and **
    • PEP 686: Python now uses UTF-8 as the default encoding
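Brian's lazy-import pick can be previewed today without the PEP 810 syntax: the stdlib's `importlib.util.LazyLoader` recipe defers a module's actual loading until first attribute access. A minimal sketch:

```python
import importlib.util
import sys

def lazy_import(name):
    """Import `name` lazily: the module body runs on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # sets up the lazy module; nothing heavy runs yet
    return module

json = lazy_import("json")       # no real import cost paid here
print(json.dumps({"pep": 810}))  # first use triggers the actual load
```

Under PEP 810 this would collapse to `lazy import json` at the top of the file.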

Michael #3: Cutting Python Web App Memory Over 31%

  • I cut 3.2 GB of memory usage from our Python web apps using five techniques:
    • async workers
    • import isolation
    • the Raw+DC database pattern
    • local imports for heavy libraries
    • disk-based caching
  • See the full article for details.

Brian #4: tryke - A Rust-based Python test runner with a Jest-style API

  • Justin Chapman
  • Watch mode, native async support, fast test discovery, in-source testing, support for doctests, client/server mode for fast editor integrations, pretty per-assertion diagnostics, filtering and marks, changed mode (like pytest-picked), concurrent tests, and soft assertions
  • JSON, JUnit, Dot, and LLM reporters
  • Honestly haven’t tried it yet, but you know, I’m kinda a fan of thinking outside the box with testing strategies so I welcome new ideas.

Extras

Brian:

  • Why aren’t we uv yet?
    • Interesting take on the “agents prefer pip” claim
    • Problems with the analysis:
      • Many projects are libraries and don’t publish a uv.lock file
      • Even with uv, it’s still often seen as a developer preference for non-libraries. You can still use uv with requirements.txt
  • PyCon US 2026 talks schedule is up
    • Interesting that there’s an AI track now. I won’t be attending, but I might have a bot watch the videos and summarize for me. :)
  • What has technology done to us?
    • Justin Jackson
  • Lean TDD new cover
    • Also, 0.6.1 is so ready for me to start f-ing reading the audio book and get on with shipping the actual f-ing book, and yes, I realize I seem like I’m old because I use “f-ing” while typing.

Michael:
  • Python 3.14.4 is out
  • Beanie 2.1 release

Joke: HumanDB - Blazingly slow. Emotionally consistent.

Episode Transcript


00:00 Hello and welcome to Python Bytes, where we deliver Python news and headlines directly to your earbuds.

00:05 This is episode 477, recorded April 20th, 2026.

00:10 I am Brian Okken.

00:11 And I'm Michael Kennedy.

00:12 And this episode is sponsored by us and you guys.

00:18 So there's a bunch of courses over at Talk Python Training.

00:21 There's pytest courses over at PythonByte, wait, Pythontest.com, my own site.

00:27 I forgot the name of it.

00:28 But thanks to everybody for the Patreon supporters and a lot of people encouraging me and grabbing copies of LeanTDD.

00:36 And I've gotten some good feedback.

00:38 So I'll plug that later in the show as well.

00:41 Yeah, if you'd like to reach out, send us topics.

00:43 One of the topics I'm covering is from somebody that sent it in.

00:47 And we really appreciate that.

00:49 So get a hold of us through Bluesky, Mastodon, or through email or the contact form on pythonbytes.fm.

00:56 And all of that info is at pythonbytes.fm.

00:59 If you are listening and you're thinking, hey, I'd like to watch this live sometime, you can just head on over to pythonbytes.fm.

01:07 Or just look around.

01:09 You can find links to watch us live on YouTube or watch the past episodes.

01:14 And finally, please join the newsletter because we send out links and information and background information.

01:22 And some people have mentioned to us before that some topics are a little over their head.

01:27 But we don't want that.

01:28 So we send you some background information so that you can understand every topic we talk about.

01:33 And with that, I'll take a rest and it'll be your turn.

01:38 You know what?

01:38 I need to take a rest too, honestly, Brian.

01:40 I've had a long weekend, big party for my wife and had a bunch of our friends in.

01:47 We rented this party van bus thing.

01:49 I drove around a bunch of wineries.

01:51 And I just need to rest.

01:52 But, you know, the Django rest type?

01:55 I don't know.

01:56 I need legit rest.

01:57 I need legit rest.

01:58 But I'm going to tell you about Django rest.

02:00 In fact, modern rest, which is a framework for Django that is type-based.

02:07 So all the classes use runtime type information to do all of their magic, right?

02:13 Not just autocomplete or linting.

02:16 And it has true async support.

02:18 So think Django Ninja-like, but with a different take, okay?

02:24 So this is a pretty cool project here.

02:26 And, you know, I just looked at it and thought, you know, this looks like something that's really fun.

02:31 It actually, one of the differences than, say, Django Ninja is it supports multiple model foundations, I guess.

02:41 So Pydantic, which is the first one listed here, which is great.

02:44 But also msgspec and Attrs.

02:47 Remember attrs?

02:48 Like attrs is still a thing.

02:49 Yeah.

02:50 From Hennick and, you know, kind of data class style.

02:53 And one of the things that's interesting here is it says if you use msgspec, it allows you 5 to 15 times faster APIs than the alternative.

03:02 msgspec is all about ultra-compact exchange on the wire type of thing.

03:07 Okay.

03:07 Also has true support for ASGI, async applications.

03:13 But one of the things that's interesting is it's just good old Django.

03:16 Like nothing too new.

03:19 Nothing that you wouldn't expect.

03:21 So if you're doing Django, it feels just like, yeah, that totally fits in.

03:24 There's a getting started page here, which is a little zoomed for all of us, that we can go down and sort of go through.

03:31 It's pretty interesting.

03:32 One of the things that's interesting, it also supports PyPy, P-Y-P-Y, not the mispronunciation of PyPI, but literally PyPy, which I think is interesting.

03:42 And Django 4.2 or above.

03:44 Hat tip to an upcoming topic.

03:46 The default recommended way to install it is uv, then Poetry, and then Pip.

03:50 So that's pretty cool.

03:51 So you've got to do things like when you install it or you set it up, you'd say, I want Django modern rest as a package bracket Pydantic or bracket attrs or bracket msgspec so that you get your various dependencies installed that you're going to need.

04:08 Or, you know, just whatever.

04:09 Just put Pydantic as a dependency as well.

04:10 Then you're good to go.

04:11 Also interesting.

04:12 Remember I talked about LLMs.txt and how I added that to Talk Python so people can get better.

04:19 LLMs understand better how to work with Talk Python.

04:20 They also understand how better to work with my courses.

04:24 So this does this as well.

04:25 It has explicitly a LLMs.txt and a LLMs-full.txt.

04:32 So if you just say, hey, Claude or whatever I'm working with, I'm going to start this project.

04:37 And it's using Django modern rest, a pretty new framework.

04:40 You might not know it.

04:41 So you can actually just drop that URL and say, please read this before you begin this project and make a note that this is a resource for you.

04:48 It also has support for Context 7.

04:51 Are you familiar with Context 7?

04:52 No.

04:53 So Context 7 is, I honestly don't really know what to make of Context 7.

04:57 I thought I understood it, but I kind of don't necessarily.

05:00 But what it does is Context 7 is a website where you can enter different libraries into.

05:06 And then they parse it and turn it into something that AIs can use to understand that library better.

05:13 I'm not sure how great it works, but you come in here and it has different skills, for example.

05:18 Like it has a Django modern rest from Django rest framework skill.

05:23 So you could give it this skill and say, hey, use this agent that understands both of these frameworks because I want to upgrade from Django rest framework DRF, which has some of the craziness that we talked about last week.

05:37 Remember?

05:37 Or two weeks ago, but last episode.

05:39 Anyway, this is a pretty cool project.

05:42 Interesting.

05:43 Yeah, yeah, yeah.

05:44 It's got stuff for, hey, for mine.

05:47 Let's see what it says about it.

05:48 I don't know.

05:48 Actually, it just takes me to it.

05:49 But, you know, it's got, you can submit your own library to this, by the way.

05:54 So that's, I think, how this got here.

05:55 I think I may have submitted this and so on.

05:58 But, yeah, anyway, you can say, hey, I want AIs to understand my library better.

06:01 And this also has a MCP, which you can install.

06:04 Actually, I'm not a super huge fan of it.

06:06 I got other things that I do for this.

06:07 But, anyway, it's interesting that they explicitly went to that effort to help you get started, both converting and just working with.

06:15 All right, so let's look at this showcase.

06:16 Like, notice here, oh, actually, I gave them some short shrift here.

06:21 Look at this.

06:22 They do message spec, which is cool.

06:23 So you can do msgspec.

06:25 But they also do Pydantic, attrs, dataclasses.

06:28 We're going to be coming back to that.

06:29 Typed Dict.

06:30 Now, that I did not see coming.

06:32 In Typed Dict.

06:32 And straight named tuples as one of your foundations, if you like.

06:36 How interesting is this?

06:37 Okay.

06:38 All right, so let's go down.

06:39 If you scroll down, I didn't want to do the, I'm going to do message.

06:41 I'll do Pydantic, whatever.

06:43 So we can go down a little bit further.

06:44 And there's a full example here.

06:46 And it just shows you, like, a one file Django thing.

06:50 So it shows you how to set up your Django app, your templates, et cetera, et cetera.

06:53 And then you just can create, you create these models for data exchange in your web app, which I think is pretty interesting.

07:00 Because a lot of people think of the model exchange to be, like, basically database classes.

07:06 But that's not really what you want.

07:07 You want, like, what does this form submit?

07:09 Or what does this API receive as data, regardless of how we store the database, right?

07:13 Yeah.

07:14 So there's a user create model, which just has an email, or a user response model, which just has a UUID, which is the ID of the created user, right?

07:21 So then you can create an endpoint.

07:23 And you say this thing accepts a post.

07:26 It is, you derive the class from controller of Pydantic serializer or controller of attrs serializer.

07:33 And then for your signature of your function, you say, hey, I want body of user create model.

07:40 And then it automatically parses and validates that Pydantic style.

07:44 And then you just use it.

07:44 So you'll never get to your code if your Pydantic model doesn't validate, parse, all those things.

07:49 So you don't have to check that kind of stuff.

07:50 Pretty neat.
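The request/response models described here can be pictured with plain dataclasses; this stdlib-only sketch borrows the example names from the discussion (the real framework would parse and validate the body before your handler runs):

```python
import uuid
from dataclasses import dataclass

@dataclass
class UserCreateModel:
    email: str        # what the endpoint accepts

@dataclass
class UserResponseModel:
    id: uuid.UUID     # what the endpoint returns: the created user's ID

def create_user(body: UserCreateModel) -> UserResponseModel:
    # By the time a framework calls this, the body has already been
    # parsed and validated against UserCreateModel.
    return UserResponseModel(id=uuid.uuid4())

resp = create_user(UserCreateModel(email="ada@example.com"))
print(isinstance(resp.id, uuid.UUID))
```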

07:51 Yeah, that is cool.

07:52 Yeah.

07:52 So there's a lot more to this.

07:54 People can dig into it and explore it.

07:57 But it looks like a pretty strong contender.

07:59 If you bump over to GitHub, it's got about 1,000 stars, 100 forks.

08:04 Pretty active.

08:04 How old does it look?

08:05 A month?

08:07 No, six months.

08:07 It was created six months ago.

08:09 So I think that's still, that's pretty good growth.

08:11 1,000 stars in six months.

08:13 I mean, it's not OpenClaw, but that's.

08:15 It looks like there's a lot of active development going on still right now.

08:18 Yeah, let's, yeah, a commit an hour ago.

08:20 Let's look at the commits.

08:22 What's going on here?

08:23 Four hours, five hours, seven hours, 11 hours.

08:26 Yeah, that's just, those are the active commits of today.

08:29 That's pretty solid, honestly.

08:30 Really good.

08:31 Yeah.

08:31 So anyway, I throw it out there as another Django area that people can pay attention to.

08:37 Another Django framework.

08:38 It's very similar to FastAPI and very similar to Django Ninja, but it seems like it's a little more flexible in the way you work with it.

08:46 Yeah.

08:47 Interesting comment here from SendPos.

08:51 Django Ninja, Django Bolt, Django Modern REST.

08:54 I guess people get back to Django and enjoy it.

08:57 Nice to see.

08:57 And I think that has, I think there's a lot to be said for using LLMs and AIs because Django's been around for a while, so they know how to deal with it.

09:08 Yeah.

09:09 Django is very well understood by AI.

09:12 So that's actually, that's actually a huge bonus in my mind.

09:15 Yeah.

09:16 Yeah.

09:16 Well, should we shift gears?

09:19 What's new?

09:19 Well, what's new is Python.

09:22 So.

09:22 I think it's been on for 30 years.

09:23 What are you talking about?

09:25 Well, so Python, I'm, I'm looking forward.

09:28 So there's a Hugo von Caminad.

09:32 I'm sorry.

09:33 I always mispronounce your name, but we love Hugo.

09:36 So, so 3.15, there's 3.15 alpha 8 out, plus releases of 3.14.4 and also 3.13.13.

09:49 There's a post about that, but I, I'm looking forward to 3.15.

09:53 So when does 3.15, we're, we're looking at the status of the versions.

10:04 But, but I think that I'm excited to get started sooner anyway.

10:08 So we've got what, a beta has come out in May.

10:12 RCs are in September and the final is planned for October.

10:15 But, look at it.

10:17 What, what, what's already in there, already in the alpha: we've got explicit lazy imports.

10:23 and that we would, we've been talking about that on the show.

10:28 and that's, that's already there.

10:30 Frozen dict built-in type.

10:32 anyway, what do I have up here?

10:34 I've got, yeah, the frozen, frozen dict built-in type.

10:38 This is pretty cool, to be able to, by default, do a, like a dictionary that's hashable.

10:45 You assign it at instantiation time and, or when you define it and, and you can't change it after that.

10:52 so it's hashable.

10:54 So that's, that's pretty cool.

10:55 Yeah.

10:55 And one thing I'd like to add to this frozen dict that I think is super interesting, we've had other frozen types like frozenset, I think, but in this Python t, the free-threaded

11:05 Python world, one of the things that can really unlock concurrency is not having to worry about locking on different objects.

11:13 If you work with frozen dicts, it's read only, and you can just have all the threads ram on it all at once, right?

11:19 You don't have to worry about locks once it's created.

11:21 So people just consider adopting immutable data types in general when possible.

11:27 Like if you're creating a dict, but you're not going to change it, frozen dict seems like something cool to put in place that like adds a little more security.

11:34 So like if you don't expect it to change, like you can set it up now.

11:37 So it cannot change.
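Until PEP 814 lands, the closest stdlib stand-in is `types.MappingProxyType`, a read-only view of a dict. Note the difference: the proxy blocks mutation but, unlike the proposed frozendict, is not hashable. A quick sketch:

```python
from types import MappingProxyType

# A read-only mapping: reads work, writes raise TypeError.
config = MappingProxyType({"retries": 3, "timeout": 5.0})
print(config["retries"])

try:
    config["retries"] = 10
except TypeError:
    print("cannot mutate a read-only mapping")
```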

11:39 Yeah.

11:39 And even things that generally we think of changing, we, you can use data flows like algorithmic stuff to do.

11:45 That's functional.

11:46 Like, like you said, it's a functional model for some part of your system that can be easily async asynced because it's all, it's all immutable types.

11:55 So yeah, pretty cool.

11:57 We got unpacking of comprehensions better.

12:02 So that's kind of fun.

12:03 Star and star star are working better for that stuff.

12:07 Let's see.

12:07 What did I have here that I wanted to talk about?

12:09 I don't remember.

12:10 Anyway.

12:11 Yeah.

12:12 Lots of great stuff.

12:14 Let's see.

12:14 Annotated type forms.

12:16 Oh, Python now uses UTF-8 as default encoding.

12:19 Can't wait for that because I'm tired of typing UTF-8.

12:21 Did you shout out lazy imports?

12:25 Yeah.

12:26 Well, explicit lazy imports.

12:28 That's, that's probably what I'm most excited about is, is being able to just say, just say lazy import json or lazy import whatever.

12:37 And it doesn't actually get imported until somebody actually uses it at runtime.

12:41 That's going to make, that's just such a clean interface and it's going to make everything, a lot of stuff so much faster.

12:49 In my world, it's the testing stuff because so much you, because pytest imports everything to start with and, but it doesn't need anything.

12:58 Like the tests only need this stuff when they're, the tests are actually running.

13:02 So having test runs will be a lot faster with lazy imports.

13:06 And can I add that that's a little bit of foreshadowing right there?

13:10 Is it?

13:11 It is.

13:11 Carry on.

13:12 Okay.

13:12 Okay.

13:13 No, I just, just a lot, a lot of exciting stuff going on in, in 3.15 that I'm, I, I'm art.

13:20 Oh, I'm already excited to play with it.

13:23 And, and it used to be, I think like, I don't even remember how far ago it was where it was sort of hard to grab an alpha release.

13:31 But now with uv, I just said uv self update and uv Python install 3.15 and bam, I had the alpha.

13:39 So it's pretty great.

13:41 That's awesome.

13:41 Yeah.

13:41 You can even just say uv venv, and just say, you know, dash that, like give it a version of the Python.

13:47 And if you don't have it, it'll just go and say, okay, we're getting 3.15.

13:50 Yeah.

13:51 Pretty cool.

13:52 Yeah.

13:53 Anyway, that's all I wanted to say, but just, I'm excited about 3.15.

13:57 Cool.

13:57 I am excited about 3.15 as well.

14:00 And that is because I just, I'm excited about a lot of the things there, but I'm extra excited about this, this lazy, this PEP 810 lazy imports, because I've discovered that it can make a mega

14:13 difference and it lets you write a lot of clean code, a lot of cleaner code than you would right now.

14:17 So I wrote an article that was sort of a, a guide of some project I did a couple of weeks ago.

14:25 And I would have talked about it last week, but we skipped last week because I was at a conference.

14:29 So we're talking about this week and the title of the article is cutting Python web app memory by over 31%.

14:36 And that's across the entire server, running Python bytes, running talk Python, Talk Python Training, all those things.

14:44 So I just sat down and said, you know, it's kind of ridiculous how much memory these apps use, like Talk Python Training alone.

14:50 I'm just going to focus on that, but like apply to this, to most Python web apps or APIs.

14:55 It alone was using one point, almost 1.3 gigs to run.

14:59 Like that seems a little ridiculous.

15:01 And there's a separate little search daemon process.

15:03 And it was using 700 megs.

15:06 Just chilling.

15:06 Why do you need so much memory?

15:08 Bad Python app.

15:09 Who wrote you is what I want to know.

15:10 So I set about the process of going like, at least let me understand what, where this memory is going.

15:17 And if there's any way that I could do something to make it better.

15:20 Okay.

15:21 It's not like we were running out of memory, right?

15:23 I have a 16 gig server running in the cloud and I think it was using nine or 10 gigs.

15:30 So there were six gigs left, but at the same time, it's what if I want to run other apps?

15:34 Like I want to self host something that would maybe power the web apps or, you know, be like a CRM or some other thing that I just want to run and not have to set up other infrastructure.

15:43 It'd be great if there's like, oh, there's so much RAM.

15:45 It doesn't even matter.

15:46 You know what I mean?

15:47 And RAM by far is by far more critical and scarce than CPU.

15:54 Not, you know, put this RAM crisis aside, this stuff that AI is triggering.

15:58 Just straight, you get 16 gigs RAM, you get eight CPU for most people, most workloads.

16:05 The CPU is pretty chill and the RAM is a lot higher.

16:08 You got to get a lot of traffic before CPU becomes the problem, right?

16:12 So thinking about the RAM, I think is important.

16:14 So I started working on this and said, well, what can I do?

16:17 And the starting point was 1,280 megabytes.

16:22 And the little search daemon thing that I told you about that once an hour, once every day, I can't remember the schedule, I think it's a few times a day, it'll pull all the content of Talk Python training and turn it into a search engine.

16:33 So like you go over here and you're like, hey, I'm interested in, you know, taking some class.

16:38 And in your class, you can say, oh, I'm not logged in, but you could just go over and search and say, well, do you talk about pytest?

16:43 Let's see.

16:44 Well, why, yes, we do.

16:46 And it has like all these really nice, deep understanding of like the hierarchy of stuff.

16:51 It's not just like a regular search, right?

16:52 Like it's something I put together, but it's still, this is a ridiculous amount, 700 megs for something that just like reads from the database and writes to the database and otherwise is doing nothing.

17:01 So I'm like, well, how can I do this better?

17:03 The first thing I did, there's five things I'm going to talk about.

17:06 Number one is I was running two to three worker processes to scale out web requests because everything was still based on Pyramid.

17:15 It's synchronous.

17:16 It's WSGI.

17:17 I'm going to have fewer worker processes than I really want to have better concurrency in the one worker process that's there, right?

17:23 I don't want like one slow request to be just like, well, that's it.

17:27 The site's not responding, right?

17:29 Because of the kill.

17:29 I decided the first thing to do was to rewrite everything in Quart and it could be other languages, anything that's async.

17:38 I could have rewritten in FastAPI, but I really like the Flask model and Quart is the true async version of Flask, right?

17:45 So that's what I did.

17:47 And that let me turn it, turn the other worker process off, which right there just cuts your memory straight in half, right?

17:54 Because they both use the same amount of memory and there's two of them.

17:56 And you know, people think like, oh yeah, whatever.

17:58 Like Michael, your site's just a, it's just a blog.

18:00 Like dude, what are you talking about?

18:01 Like you seem to think like there's a lot going on here, but I work on a real app.

18:05 So it doesn't apply to me.

18:06 I ran my little tallyman thing against Talk Python training, not the, any of the podcasts of just the courses.

18:12 178,000 lines of Python, 300,000 lines total.

18:16 Like that's enough to like spend some time figuring out what's going on, right?

18:19 That's complicated enough for most apps I imagine, at least to be somewhat representative.

18:23 But you're like running courses and podcasts and stuff.

18:26 This is more complicated than just a blog.

18:28 Yeah, that's true.

18:29 That's true.

18:30 Yeah.

18:30 But tell Reddit that.

18:31 So then the next thing, the number two was I'm going to rewrite this in that the raw plus DC design pattern that I talked about just using straight queries, not ORMs or ODMs.

18:43 Although I have something interesting to say my extra about that, but still.

18:50 So just straight queries and then mapping data classes, or it could be Pydantic or attrs, right?

18:50 It doesn't really matter.

18:51 Pick a model, a simple data model.

18:53 And that actually made a pretty big difference.

18:55 That dropped 200 megs, 100 megs per worker off just switching away from an ORM to just raw queries and data classes.

19:04 Makes sense.

19:05 And it almost doubled the request per second, which is wild.
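The Raw+DC pattern can be sketched with the stdlib alone; sqlite3 stands in for the real database here, and the table is illustrative:

```python
import sqlite3
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    id: int
    email: str

def fetch_users(conn):
    # Raw SQL in, lightweight frozen dataclasses out: no ORM layer
    # holding identity maps or change tracking in memory.
    rows = conn.execute("SELECT id, email FROM users ORDER BY id").fetchall()
    return [User(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('ada@example.com')")
print(fetch_users(conn))
```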

19:08 Okay.

19:09 So then number three, well, once I had done the court thing, I was able to just tell Granny and like, I want one worker or not two or three.

19:18 For a while it was three or four.

19:19 And I've been like dialing it back.

19:20 It's just got faster.

19:21 So that saved 500 megs there.

19:23 So now we're down to, yeah, we're down to 530 megs for this process.

19:27 And then for the search thing, what was happening is it would load up a bunch of imports, PEP 810, getting exciting here.

19:36 It would load up a bunch of imports and then it would run a bunch of code that would like get all, I don't know exactly where the memory was going, but a lot of stuff that it would work with by doing, interact with the database and so on would get cached or just left in memory.

19:49 So it was using 700, 708 megs of memory.

19:53 So I said, well, what if like really the main core loop of the app is just start, look at a timer after a certain amount of time, run a really complicated set of queries, and then

20:05 write a bunch of structured data indexed back into a certain structure so I can query ultra fast, right?

20:11 That, that main part doesn't need, it was importing like all the library, like all of Talk Python training, which would pull in everything that Talk Python training's main dunder.

20:20 And it would pull in, which would pull in all the libraries that every, you know, it would cascade into this mega import.

20:25 So I said, well, what if you just had the loop and then in a separate file, you ran, you would start a process that ran that separate file that did the indexing and then stopped and like just finished, like, cause it didn't need a response.

20:38 It was like, okay, I'm done indexing.

20:39 And that temporary sub process would be the thing that did all the imports.

20:44 And when it shuts down, those imports go away.

20:46 So that took it from 708 megs to 22.

20:49 That's great.

20:50 That's insane, right?

20:51 And all I had to do is change where, sorry, go ahead.

20:54 The sub process is still getting like 700 megs or whatever, but, but it goes away.

20:58 But for like 30 seconds to a minute, not constantly.

21:01 Right.

21:02 And these spikes are, I mean, they're fine, but it's like, you know, not everything is spiking in memory at the same time, just like they don't in CPU.

21:07 It's just like, it's not a fixed cost, which is pretty interesting.

21:11 You know, the work was just like, I'm just going to do the index, move the indexing function to a new file, move the imports that it needs to a new file and just do a sub process to call it instead of calling it directly done.
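That loop-plus-subprocess shape is only a few lines; here the "heavy" job is an inline stand-in passed via `-c`, where the real version would be a separate script that imports the application code:

```python
import subprocess
import sys

# Stand-in for the indexing script; in the real setup this would be a
# separate file whose imports pull in the whole app.
HEAVY_JOB = "import json; print(json.dumps({'indexed': True}))"

def run_index_cycle() -> str:
    # The child pays the import and working-memory cost, then exits,
    # returning all of that memory to the OS between cycles.
    result = subprocess.run(
        [sys.executable, "-c", HEAVY_JOB],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(run_index_cycle())
```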

21:23 Yeah.

21:23 And it made, I don't know what that division is, but like 20 times, 30 times better.

21:27 You know, it's incredible.

21:28 And the last thing, this one I think is going to surprise people.

21:33 And this is the one that really hits the point home for lazy imports.

21:37 If you type the words import Boto3, because you're doing something with S3 or something similar, your working memory goes up by 25 megs per process, per worker.

21:47 If you type the words import matplotlib, your working memory goes up by 17 megs.

21:51 If you type import pandas, your memory goes up by 44 megs.

21:54 Those three imports right there are almost a hundred megs of memory usage.

21:58 Yeah.

21:58 It's a lot.

21:59 So are they needed?

22:00 If you're doing core data science, they are.

22:02 But for me, there's like an admin section where I can go and view some reports.

22:08 If I view the reports, I run, I use these libraries, but if I don't view the reports, which I really don't look at them hardly ever, just like maybe once a month, like, Hey, I wonder what that looks like.

22:17 Let me go hit it.

22:17 It pulls all that stuff in and then it generates the report.

22:20 But the worker process recycles a couple of times a day.

22:24 Yeah.

22:24 So even if I view the report five hours later, that stuff's unloaded again and I get a new version and it's not another month until I load up that a hundred megs.

22:32 So I went from like 500 megs to 450 just by saying, well, instead of importing the top of the file, let's import in the function that generates the actual picture, you know, the report that I need from this.

22:44 And boom, a hundred megs less memory usage.

22:46 And if that had the word lazy in front of it, I wouldn't have to rewrite my code.

22:50 It would have effectively the same behavior.

22:53 It wouldn't import until I actually run the function, but I could do PEP 8 magic and put it at the top.

22:58 What do you think of that?

22:58 That's pretty cool.

22:59 So now I'm thinking that like lazy imports, it imports when it needed, but it doesn't, it doesn't ever unimport.

23:05 And I'm wondering if like a future Python will add like, you know, some, something to the lazy import that like caches stuff out of memory.

23:15 Right, right, right.

23:15 Like the runtime could see nobody is caring about it being imported anymore and no one has set a value on it.

23:22 So maybe it could just go away a hundred percent.

23:24 But that means, I mean, people I think often think of lazy import as something that's a speed, a speed up like, oh, it's faster because you don't have to do all the imports until you use them.

23:35 I'm sure that's true.

23:36 I don't have numbers around it.

23:37 But what's really interesting is there are some imports that are mega in amount, how much they actually increase your working memory.

23:44 If they're lazy and you don't use them very often, they will not run very often.

23:48 And I think it'll actually make a pretty big difference.

23:50 So, you know, like you just write code, like just do import inside the function instead of at the top.

23:55 Like your linters will go, you shouldn't do this.

23:57 Like you leave me alone.

23:58 I'm doing this.

23:59 This is, this is really good for me.

24:01 So the final thing, I just moved a bunch of caches to disk caches instead of memory caches, which is good.

24:07 And so I put a little picture in there, but it saved a ton of memory: 3.2 gigs less memory used on the server by applying that to Python Bytes, Talk Python, and Talk Python Training.
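A disk-backed cache can be as simple as JSON files keyed by name; a minimal sketch (a production app might reach for a library like diskcache, and would use a fixed cache directory):

```python
import json
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # a fixed path in a real deployment

def cache_set(key, value):
    # The cached value lives on disk, not in the worker's resident memory.
    with open(os.path.join(CACHE_DIR, f"{key}.json"), "w") as f:
        json.dump(value, f)

def cache_get(key, default=None):
    path = os.path.join(CACHE_DIR, f"{key}.json")
    if not os.path.exists(path):
        return default
    with open(path) as f:
        return json.load(f)

cache_set("episode", {"number": 477})
print(cache_get("episode"))
```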

24:19 There's a bunch of other apps, but they, oh, and the search thingy.

24:23 But yeah, those were the four big ones.

24:26 Cool.

24:26 Pretty cool, huh?

24:26 Yeah.

24:27 Yeah, it's very cool.

24:28 So a little bit long there, but I thought that people might appreciate some kind of a roadmap and how you can do that yourself.

24:35 Like, so the total went from 1,988 megabytes down to 472 megabytes used across the apps.

24:44 That is a lot of difference and you can get a way, you can run more apps.

24:49 You can scale out more and get way better performance because now you can like run more workers if you really needed to or whatever.

24:55 I think there's a lot of, a lot of benefits there.

24:58 So cool.

24:59 A lot of excitement out in the audience.

25:01 They're talking about this topic.

25:03 I think it's cool.

25:04 All right.

25:05 Well, one of the things that I get excited about is testing.

25:07 You don't say.

25:09 Pick that up about me a little bit.

25:12 So I want to talk about tryke right now, like as in a tricycle.

25:17 So this is a new, a new project.

25:21 And how new?

25:22 So it's got four stars, but it, I mean, it just went up like last month or something.

25:29 Very recent.

25:30 So taking a look at this, this was submitted by the person that created it, Justin Chapman.

25:37 But, but I'm, you know, I like the idea of like thinking outside the box.

25:41 So for testing.

25:43 So tryke is a Rust-based Python test runner with a Jest-style API, which is... I'm not familiar with Jest, but that's what it said: the JavaScript Jest.

25:55 I don't remember.

25:56 Anyway, maybe.

25:57 I don't know.

25:58 I think so.

25:58 Yeah.

25:59 So yeah, you, you can tell how Python focused I am most of the time.

26:05 But so what is it going to look like?

26:07 It's let's, let's zoom in a little bit.

26:10 Getting started.

26:12 So it looks way different than pytest.

26:17 So let's say we've got a normal function, add, for example, and we want to test that.

26:24 We'd say like with describe and add.

26:27 So with describe and then some string, which is, like, your test name, I guess.

26:32 And then decorators of tests.

26:35 And then another, I think that's just a description, but it's saying.

26:39 Yeah.

26:39 First I thought it was a string that was being parsed to make it run.

26:42 Yeah.

26:42 Whereas this one plus one.

26:43 I think that's just the message that comes out.

26:44 Yeah.

26:45 It's just the test case.

26:46 And so this is a very basic, we're going to get more from Justin to describe this.

26:51 I've like reached out to him and said, this is really interesting.

26:53 I'd like to know more.

26:54 So I'm going to do more research.

26:55 I haven't really played with this yet, but, but I, but I, I'm intrigued by it.

27:01 I kind of like pytest.

27:04 pytest just uses plain assert.

27:07 And for things like soft asserts, I use my pytest plugin called pytest-check.

27:15 But this is different.

27:16 By default, all of these are soft asserts.

27:21 So it doesn't stop the test.

27:22 You can, you can expect a lot of things.

27:24 It uses the expect keyword.
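Soft assertions are easy to illustrate in plain Python. This is just the general idea behind tools like pytest-check, not tryke's or pytest-check's actual API:

```python
# A soft check records failures instead of raising immediately, so a
# single test run reports every failed expectation at once.

class SoftChecks:
    def __init__(self):
        self.failures = []

    def equal(self, actual, expected, label=""):
        # Record a mismatch instead of stopping the test here.
        if actual != expected:
            self.failures.append(f"{label}: {actual!r} != {expected!r}")

    def finish(self):
        # Raise once, at the end, listing everything that failed.
        if self.failures:
            raise AssertionError("\n".join(self.failures))

checks = SoftChecks()
checks.equal(1 + 1, 2, "addition")
checks.equal("abc".upper(), "ABC", "upper")
checks.finish()  # passes: nothing was recorded above
```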

27:26 So what do we got here?

27:27 We've got a watch mode, so it watches to see if you have new things to test.

27:33 Native async support, fast test discovery in source testing.

27:37 So you have the ability to just put tests right in the source code, instead of having to have a separate test file.

27:44 You can do that with pytest, of course, but mostly people don't.

27:47 Doc test support, kind of like pytest.

27:49 Client server mode, which is, that's, this is an interesting one.

27:52 So the client server, they're all interesting, but this idea that you can have a server running.

27:58 So why would you do that?

27:59 So one of the things, like I was just talking about memory-wise: if you run pytest, it has to import everything, it imports a lot of stuff, and then you're running tests.

28:06 The server is just doing that up front, so while you're watching, it's already done that.

28:11 It's got a warm cache of everything so that, that the individual tests can go faster.

28:17 So you get client/server mode, and, you know, pretty per-assertion diagnostics.

28:22 Of course, we would expect no less from a new test framework.

28:25 So does it basically parse everything, do the test discovery once, and then just rerun it?

28:31 Is that why, then?

28:34 I think so.

28:35 So it's doing, like, for instance, it's doing a changed mode.

28:38 Oh, like there's, I think it's for picking up new tests.

28:43 So as you're like, if you're doing test and development and you're modifying a test and modifying code, it'll pull in the stuff that changed into the, into the server.

28:52 But it can probably skip discovery, or limit discovery to, like, the changed file or something like that.

28:59 Yeah.

28:59 That's interesting.

29:00 Well, right.

29:00 And I think it's using Git information to find out which elements have been modified, and, yeah, why not?

29:09 Most people are using Git anyway.

29:10 So I think that's how it's using it anyway.

29:13 Oh, interesting.

29:14 I commented that I just sent him information about pytest-check, and he added that to the... oh yeah.

29:21 Soft assertions, like pytest-check.

29:23 Pretty cool.

29:24 And I'll throw out there that I also, I'm plus one for them on the fluent API instead of the raw assert.

29:32 I really like the expect-this-to-equal style.

29:37 And you kind of put it together as an English-like sentence, where you can say, like, "in list" and give it an item and a list or something, rather than just writing a raw expression.

29:50 I don't know.

29:50 I like that fluent API.

29:51 Do you?

29:52 Okay.

29:52 Yes.

29:52 Cause that's not something.

29:54 I don't know.

29:54 No, it's not weird.

29:55 It shows up a few times; I've seen it in a couple of different test frameworks, and there's even an extension for unittest to be able to do this.

30:05 And I think I've seen pytest extensions to do this, but it's not something I'm used to.

30:10 I could get used to it, but, it reads in English fairly well.

30:17 So yeah, that's why I like it.

30:18 It's more writing, but the truth is, if your editor is not auto-completing when you type "to", you're probably doing it wrong, right?

30:25 You're not writing all those words, those characters you're, you're selecting them from a short list.

30:29 Yeah.

30:29 And for me, I like the readability, rather than looking at an assert statement and going, what is it really trying to get at with this combination of things that resolves to a Boolean?

30:37 You can say like, you know, expect this value, like to be in the list or to not be in, you know, like something like that.

30:44 Right.

30:45 Or, yeah.

30:45 I don't know.

30:46 I like the readability of it.
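A fluent assertion wrapper takes only a few lines to sketch. This is a hypothetical illustration of the style being discussed, not tryke's real API:

```python
# Each method reads like an English sentence and produces a
# descriptive message on failure.

class Expectation:
    def __init__(self, value):
        self.value = value

    def to_equal(self, other):
        assert self.value == other, f"expected {self.value!r} to equal {other!r}"
        return self  # returning self allows chaining

    def to_be_in(self, container):
        assert self.value in container, f"expected {self.value!r} to be in {container!r}"
        return self

def expect(value):
    return Expectation(value)

expect(2 + 2).to_equal(4)
expect("b").to_be_in(["a", "b", "c"])
```

The editor-completion point from the conversation is exactly why this style works in practice: after `expect(x).` the available checks appear as a short list.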

30:47 I would just, just notice also this is made by Zensical, tie into previous conversations.

30:54 At least the website is.

30:55 So, anyway, I think that I'm, I think there's some interesting, interesting work here.

31:02 I like the async support.

31:04 I definitely want to play with this because I think that's, it's an interesting idea.

31:08 so anyway, cool, cool.

31:12 Well done.

31:12 It's very new.

31:13 We'll have to, we'll have to see where it goes.

31:15 and I sent him some, I haven't, I'm sorry, Justin, I haven't, haven't read your reply yet.

31:22 He's, he already responded to me with some questions.

31:26 I sent some questions yesterday.

31:27 kind of like I was curious about startup.

31:29 Apparently there is a setup, setup, tear down sort of fixture.

31:33 Like there is a fixture feature.

31:35 I just haven't figured it out yet.

31:36 So it's somewhere in the documentation.

31:38 So anyway, exciting things, and I'll keep an eye on this space.

31:44 So indeed, let me throw out a meta topic before we get to our extras, Brian, sort of inspired by this, but more broad.

31:50 I know people are hesitant to adopt new frameworks, like a new testing framework or a new web framework or a new database thing or something.

31:57 But with the agentic AI stuff that we have these days, if you pick one and it turns out it's no longer updated, or you don't like it anymore, or whatever, it's so much easier to just convert it back into one of these, or over to the next thing.

32:12 Instead of having some huge, like, Oh no, now we've got to take, you know, two weeks and we're all rewriting the tests.

32:17 Like you could probably get to it from pytest pretty quickly.

32:21 Yeah.

32:21 Yeah.

32:22 We probably could.

32:23 And also, yeah, the questionable thing, the scary thing, is that this is pretty new.

32:30 Is it going to stick around?

32:32 Like, that, that, that's the big one.

32:35 A lot of people have excitement around something.

32:38 I was a little bit curious, so I took a look; this is under Justin Chapman's account.

32:44 and he's the current CTO of a new V.

32:48 So he's got a, he's probably using this at work, with his, on his day job.

32:53 So this is a good thing.

32:54 he's contributed to ty and uv, which is kind of cool.

32:59 So that's cool.

33:00 There's some hints that maybe this will stick around and, maybe in, especially if he's using it, on a regular basis, he's probably using it and supporting it himself.

33:09 So, a little bit more.

33:11 Yeah.

33:12 So I do check these, these sort of things out.

33:14 I've seen a lot of new projects that are probably assisted by AI to get created.

33:22 It might just be a fun toy for somebody.

33:24 And that's not something I really want to cover in this, on this podcast, but something that looks like maybe they're serious about it.

33:30 Yeah.

33:30 We'll cover it.

33:31 It doesn't matter.

33:32 So cool.

33:33 Yeah.

33:33 It looks very, very neat.

33:34 All right.

33:35 I've got a handful of extras.

33:37 You want to hit your extras next or go ahead.

33:40 Go ahead.

33:40 Okay.

33:41 let's see.

33:42 I saw this came up in a couple of newsletters and I was intrigued by an article called why aren't we uv yet?

33:50 It did some analysis of different, not sure how they got them, but like some top Python projects.

33:59 I don't know, but, you know, uv is popular.

34:02 Why isn't it being used more?

34:04 The interesting thing, and we'll link to it of course, is I think one of the reasons why this gap looks so big is that a lot of people with requirements.txt are still using uv.

34:18 I have a lot of projects at work that are requirements.txt-based, and our instructions are to use uv. Just

34:29 because we're not publishing a uv.lock file doesn't mean that we're not using uv.

34:35 So every single one of my projects... no, that's not true.

34:38 Almost every single one of my projects has a requirements.txt and no uv.lock, and it's all uv.

34:44 Yeah.

34:45 So I don't know if the assumptions of this study are correct is all I'm saying.

34:50 Yeah.

34:51 That's a really good point.

34:51 I very much prefer requirements.txt with pinned stuff over the uv.lock.

34:56 I don't know why I just do.

34:58 Well, I don't know why.

34:59 Well, right now I, I'm, I'm leaving it open for the develop, the developer to choose, if they want to use uv or not.

35:06 Yeah.

35:07 we probably get to the point where we're not making that a choice, but, anyway.

35:12 I find that requirements.txt diffs a little nicer.

35:15 It's like, it's easier to read and especially get diffs to look and go, oh yeah, okay.

35:19 This is what changed.

35:20 Whereas the uv lock has got like so much, especially with the hashes and so on that it's like, ah, you know, there's so much noise in the uv lock versus the requirements.txt.

35:28 So yeah.

35:29 But I, I'm, I, mine are pretty noisy though.

35:32 Cause I'm using like, requirements.in and, like using uv to publish a more detailed requirements.txt.

35:41 Yeah.

35:41 That's, I do the same thing, but I don't include the hashes.

35:43 Maybe I should.

35:44 I don't know.

35:44 I think, according to what I heard listening to Python Bytes recently, we are supposed to save hashes there.

35:51 Okay.

35:52 what else?

35:53 the PyCon us, talk schedule is up.

35:57 if you're going, you can check it out.

35:59 One of the things I noticed, and I knew this as a submitter, is that there's an entire AI track.

36:10 And I don't, I don't know how I feel about that, but yeah, whatever.

36:13 I think I'm excited about it.

36:14 Honestly.

36:15 Oh yeah.

36:15 Okay.

36:15 Yeah.

36:16 I think so.

36:16 I won't be there, but everybody that does, you're going to be there.

36:20 I think you will be there in spirit and I will hand out Python bytes stickers and they will be carrying some of you with them.

36:27 Cool.

36:28 let's see.

36:29 Oh, I wasn't going to cover this, but now that I've already have it up, Justin Jackson, has a, what has technology done to us?

36:37 blog post.

36:38 I was just reading about that recently.

36:40 Oh, that's it for my, Oh, I have one other extra.

36:43 Here it is.

36:44 the lean TDD book, I hinted at that earlier.

36:48 A couple of days ago, I put out version 0.6.1.

36:55 I'm still on, on track.

36:57 So I, this is really close to what I want to read.

37:00 So I'm, I, when I, I'm taking a trip business trip.

37:04 And then when I get back from the business trip, I'm going to start recording the audio book for this, but I've got new, new cover art with the, little rocket.

37:10 I like rockets and, I'm pretty excited about the state of this right now.

37:16 I'm happy with the flow.

37:17 The first iteration of it, I didn't enjoy reading.

37:21 And why would I want somebody to buy a book that I don't enjoy reading?

37:24 But now I'm like reading it all the time.

37:26 I don't want to read other people's books anymore.

37:28 I'm, I'm liking my, anyway, enough about me.

37:31 that is my extras so far.

37:33 Awesome.

37:34 Awesome.

37:34 Congrats on making progress on the book.

37:36 So I was going to cover that Python 3.14.4 is out, but you kind of already talked about that before.

37:42 Well, we just zoomed by 3.14.4 though.

37:45 So I'm glad.

37:46 Yeah.

37:47 There's some nice stuff out here and I'm not going to go into detail.

37:49 It's just an extra and all those kinds of things, but there are fixes CVE such and such.

37:57 There are two security vulnerabilities addressed.

37:57 Actually three.

37:58 Sorry.

37:58 I missed one.

37:58 So there are at least three security CVE things fixed in just 3.14.4.

38:06 And there's a couple of other security issues as well that don't seem to have CVEs.

38:10 So that's alone.

38:12 It's probably worthwhile.

38:12 So I will instead tie back to your "why aren't we uv yet?"

38:18 And just point out that if you just type uv python upgrade, it will now do an in-place upgrade of 3.14.3 or .2 or .1 into 3.14.4.

38:29 And so your virtual environments and all that stuff should just pick that up right away.

38:33 If not, at least a minimum of you can now recreate the virtual environment with it.

38:36 Yeah.

38:37 Well, not only that, any of the versions of Python that you have installed.

38:43 If you say uv python upgrade, it upgrades all of them.

38:46 So yeah, it's excellent.

38:47 And in true uv style, it does it in parallel because it should.

38:51 Yeah.

38:51 Actually, after you mentioned that, I just went over and clicked and did it and it's done already.

38:56 So yeah, that's awesome.

38:58 All right.

38:58 One more release.

38:59 I just want to give a shout out to you.

39:00 I've been kind of like bagging on Beanie a little bit, although I'm a huge fan of Beanie.

39:04 Bagging on Beanie.

39:05 Beanie bags.

39:06 Just saying. Like, remember, I've been talking about my Raw+DC pattern.

39:09 Like there were two reasons I was moving to it.

39:11 One, because I think AIs are much, much better at understanding raw query syntax than ORMs or ODMs wrapped around classes, which then somehow sometimes resolve into raw queries, that kind of thing.

39:25 Yeah.

39:25 So I've been talking a lot about that and so on, but also that it was some of the libraries I was using were no longer updated.

39:31 And I'm like, ah, that's such a, such a hassle.

39:34 Like you looked at the releases for Beanie and you waited for that to come out.

39:37 You'd see that like seven months ago, it was like, oh yeah, we fixed a couple of things.

39:44 And then a while ago, we made some changes that introduced like some weird breaking changes, but like we kind of fixed the breaking changes later.

39:51 You know, it was not really getting a lot of love.

39:53 So there's actually a major release to Beanie that has like a ton of fixes and a ton of contributors, a ton of stuff.

40:00 So if you're using Beanie 2.1.0 is out.

40:02 You should definitely check that out.

40:04 Nice.

40:04 Yeah.

40:05 I feel like we got a joke maybe.

40:07 What do you think?

40:08 You want to take this one?

40:09 Yeah, I'll take this one.

40:10 But on the topic of updates: that's one of the things with agents that I didn't realize I was going to enjoy. I think I want to write this up for next week or the next time we record.

40:25 But maintaining an open source project is easier now, when you can offload some work to an agent.

40:34 I'm actually a better maintainer now than I was before.

40:37 Me too.

40:38 There have been some projects.

40:39 I'm like, gosh, that's kind of tricky.

40:41 I don't know if it really justifies the effort.

40:43 Some people are asking for features.

40:44 I'm like, yeah, we really should support that new feature.

40:48 Hey, Claude, how hard would it be?

40:49 And that'll sketch it.

40:50 I'm like, okay, yeah, this is totally doable.

40:52 Yeah, I was doing some gardening this weekend and having an agent work for me while I was doing my own thing.

40:59 So anyway, let's have something funny.

41:03 And I was going to bring this up.

41:05 This is, I think, an April Fool's joke from MotherDuck, which is the company for DuckDB, I think.

41:13 Right.

41:13 Mother, they're the company behind Duck DB.

41:16 And this is like their commercial offering.

41:19 So you can run Duck DB better, basically.

41:20 Okay.

41:21 Well, they put out HumanDB instead of DuckDB.

41:26 And they did like Human Feet instead of Duck Feet.

41:29 And it says, blazingly slow, emotionally consistent.

41:34 The world's first human-powered analytical database.

41:37 Why pay for compute when Dave is right there?

41:41 This is just pretty darn fun that you can just, you can do pip install Human DB and import Human DB and do queries.

41:51 And it just, and it just like plays for you.

41:54 And it plays an audio.

41:55 Dave is squinting at this.

41:57 And it's like, yeah.

41:59 Can you actually install it?

42:01 Yeah, I did.

42:02 I did install and ran it.

42:04 And it's pretty funny.

42:07 Dave is, yeah, contacting Dave for this query.

42:10 The website is very complete about how this all works.

42:15 It's got in-brain storage, post-it indexing.

42:19 Suboptimal but colorful.

42:21 Each index is handwritten and stuck to the monitor bezel.

42:25 Yeah.

42:26 OLAH processing: online analytical humans.

42:29 Eventually consistent.

42:31 Dave will get back to you.

42:32 He'll get back to you.

42:33 SLA is one business day.

42:35 Or three if it's quarter end.

42:37 Or five if Dave's on PTO.

42:39 We'll circle back.

42:41 SQL or natural language.

42:43 Yeah.

42:45 Dave learned SQL first, then English.

42:47 He understands both.

42:48 Just ask him anything.

42:50 So, yeah.

42:51 So, yeah.

42:52 I did pip install this.

42:54 Played with it.

42:55 And it's pretty funny to watch.

42:59 Oh, you can do.

43:00 There's examples.

43:01 So, select average salary from employee where or whatever.

43:05 It just runs it.

43:07 Borrowing Gary's ledger pad from the query.

43:10 And then you have to wait for it.

43:11 The average engineering salary is somewhere between, somewhere around $87,000, give or take.

43:17 I ran those numbers using Gary's ledger pad.

43:20 But he wants it back.

43:21 So, you know, rounding.

43:22 So, I just had a lot of fun with it.

43:27 I probably spent 20 minutes playing with HumanDB a couple weeks ago.

43:33 Yeah, that's really funny.

43:35 Benchmarks.

43:36 Oh, it's got benchmarks.

43:37 So, DuckDB is 0.003 seconds.

43:43 HumanDB, two to four business hours.

43:45 Nice.

43:46 DuckDB is pennies to run.

43:49 But HumanDB is $49 a month plus snacks.

43:53 Nice.

43:53 The vibes.

43:54 Clinical versus immaculate.

43:56 Gut feeling.

43:57 A built-in gut feeling.

43:58 That's great.

43:59 Remembers your birthday.

44:00 Dave is thoughtful.

44:01 And DuckDB will not remember your birthday.

44:03 But it probably will if you put it in the database.

44:05 Anyway, that's funny.

44:06 Oh, wow.

44:07 They've got pricing.

44:08 They have an enterprise tier.

44:10 Enterprise.

44:10 Let's talk.

44:11 Unlimited human analysts.

44:13 On-call overnight human.

44:14 We'll figure out the SLA.

44:16 Dedicated Slack workspace.

44:18 Quarterly team pizza party.

44:21 This is funny.

44:22 Dave gets equity.

44:23 Dave gets equity.

44:25 The $0 one for the free one.

44:27 Emotional support not guaranteed.

44:30 So, that's funny.

44:31 I wonder, can you just buy it?

44:33 Upgrade to Pro.

44:34 No.

44:35 Because it is a joke.

44:36 But it's funny when people carry it farther.

44:39 And they actually take a credit card or something.

44:41 Yeah, it's pretty funny.

44:43 I love it.

44:43 Anyway.

44:44 I have one really quick thought to close things out.

44:46 Okay.

44:47 That's a little bit practical.

44:47 For those folks out there that are interested in this "why aren't we uv yet" study, you could look at the requirements.txt file.

44:57 And if it was generated with a pip-compile sort of thing, a uv pip compile, it would say so.

45:04 It'll say "generated with command."

45:05 And that command will start with uv.

45:06 So, you could parse the requirements.txt files and get greater visibility.
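The detection idea can be sketched in a few lines. The header text in the sample below is an assumption about what uv pip compile typically writes, so treat the exact wording as approximate:

```python
# uv pip compile (like pip-compile) writes a comment header naming the
# command that generated the file, so scanning the first few lines of
# a requirements.txt can reveal uv usage.

SAMPLE = """\
# This file was autogenerated by uv via the following command:
#    uv pip compile requirements.in -o requirements.txt
flask==3.0.3
"""

def generated_by_uv(text: str) -> bool:
    head = "\n".join(text.splitlines()[:5]).lower()
    return "by uv" in head or "uv pip compile" in head

print(generated_by_uv(SAMPLE))             # → True
print(generated_by_uv("requests>=2.0\n"))  # → False
```

As noted in the conversation, this only catches compiled files; hand-written requirements.txt files with unpinned versions carry no such marker.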

45:11 But only for the ones that pip-compile, not the ones that just install unpinned versions.

45:15 Well, and also, if that's how they're, if they are generating the requirements.txt with uv.

45:21 So.

45:21 It would raise the numbers.

45:22 I don't know by how much.

45:24 Anyway.

45:24 Yeah.

45:25 Oh, also, there are so many projects or libraries that don't have either.

45:33 Like.

45:34 Yeah.

45:34 True.

45:34 That's true.

45:35 Libraries just have their dependencies in pyproject.toml.

45:38 So there's no uv.lock or requirements.txt.

45:41 You know.

45:41 The world's complicated, Brian.

45:43 Do you know what's not complicated?

45:45 What's that?

45:45 We are at the end of the episode.

45:47 So, thanks everybody.

45:48 I'm going to press the goodbye everybody button.

45:50 Goodbye everybody.

45:51 Bye.

