We used ChatGPT to clean up the grammar and syntax of this transcript.
Greg: Halloween, I'm bringing the energy! You weren’t ready for this, were you?
Thomas: Absolutely not!
Greg: All right, that’s good. I love the energy. We’ve got to have the energy! So, welcome to the Upsun live stream! My name’s Greg Qualls, a wannabe developer based in Texas, and I’m here with Thomas di Luccio. Today, we’re going to do some of the spookiest things you could ever imagine. We’re going to do a...load test! I don’t know, to me, it’s fun. We’re going to break things. We’re going to do it live and figure it out. How are you feeling about this? Are you ready?
Thomas: We’ll see! I guess I’m ready for this. I prefer to break things privately, but doing it in front of people—that’s a whole different story! Still, better to break things now than during Black Friday or other peak times; it’s less spooky that way, even if today is a spooky day, wherever you’re celebrating it.
Greg: Right! Some people might wear costumes; we do load tests.
Before we hop into it, we should cover some emerging news. Let’s go through it!
All right, as my screen is shared here, I’ll start with today’s emerging news. This came out yesterday: Google’s CEO announced that over 25% of Google’s code is now written by AI. Here’s my question, since this number came from an earnings call with analysts: how much of it is what we’d consider generative large language model (LLM) AI, using tools like ChatGPT or Claude, and how much could just be autocomplete? What do you think?
Thomas: That’s a great question. Honestly, I don’t know how I feel about this number because we don’t know if there’s an AI writing entire pieces of code, or if it’s just assisting, like autocompleting code. I mean, if something fails, maybe it’s an easy out—you just blame the AI!
Greg: True, but that’s dangerous because when the AI becomes sentient and starts taking over, well, you’re first on the list for blaming it.
Thomas: Exactly. It’s not that the AI was at fault; it wasn’t given proper instructions or resources. That’s my take—maybe we just need to wait and see if these figures keep rising over the months. It could be 25% now, and maybe in the future, it’ll be 50%.
Greg: Right! It’s like the setup for a disaster movie—a sentient AI taking over, line by line. And from what I read in the article, this news felt like an “earnings grab”—like Google felt they had to mention AI to get attention, even if there wasn’t much else to report.
Thomas: Yeah, exactly. They want AI to be perceived as safe, like, “Hey, we use AI, so it’s safe to use!” It’s a bit of reassurance, I guess.
Greg: Interesting perspective! Anyway, what do you have for us this morning?
Thomas: Mine isn’t a tech news piece—at least, not yet. This is geek news. Recently, I came across an update on the Langlands program. Ever heard of it?
Greg: No, what’s that?
Thomas: It’s essentially the “theory of everything” in mathematics, named after Canadian mathematician Robert Langlands. In physics, scientists work to unify all forces; Langlands aims to unify all fields in mathematics into one framework. It’s complex, and honestly, above my understanding, but it’s incredible to see this progress in mathematics. If you’re a nerd, you’d love it!
Greg: That’s amazing! Just trying to wrap my head around a “theory of everything” in math is mind-blowing.
Thomas: Exactly, it’s like building a framework where all mathematical fields are unified in one coherent way, allowing new boundaries to be pushed forward. It’s insane.
Greg: That makes sense—like they want an equation or framework that all other equations can build upon, because it’s been proven to work. That’s awesome.
Thomas: Yes, and whatever field of mathematics you’re in, this unified way would solve everything. If anyone finds mistakes in my explanation, let me know, but that’s the general idea.
Greg: Cool! Maybe we can move on to the “Stash of the Day”?
Thomas: Absolutely! “Stash of the Day” is where we showcase cool things we found that make us happy and hopefully make others happy too. I think I’ll start because, well, I’m first up.
Greg: Go for it!
Thomas: All right, so I have some interesting news for today’s “stash”—it’s a bit different, though. There’s this kind of strange news in the DevOps world around WordPress and WP Engine. It’s a whole drama, mixing business decisions with what feels like a committee of three toddlers in trench coats making erratic choices.
Greg: Ha! Have you seen the Godzilla movie where the scientist says, “Let them fight”? That’s my approach to this: just let it happen and see where it lands!
Thomas: Exactly! Every other day there’s some new twist, and it’s honestly a bit sad. But I found a good tool to deal with all the noise: a Google Labs product called NotebookLM, designed with writers in mind. It helps keep track of backstories and fact-checks, working a bit like Retrieval-Augmented Generation (RAG): you feed in source material, and it organizes it to help you remember things.
Greg: That sounds fascinating! So, with NotebookLM, you can pull from different sources and have your own generative AI create content based on what you’ve fed it?
Thomas: Yes, exactly. Writers can add documentation and sources on the left side, then ask questions or generate new material based on what’s stored. It’s super useful for anyone building something with extensive lore or data. I’m not using it for that, but I know others who find it helpful.
Greg: That sounds like fun. I can think of ways to play around with that!
Thomas: Right? One interesting feature is that it can create its own podcast, though we couldn’t get the audio working here. It’s interesting, but I don’t fully understand how it fits into the tool yet.
Greg: Maybe some people would like listening to their notes or documents as they work or commute, so they can keep up with things while on the go.
Thomas: Yeah, it’s almost like creating an audio version of what you wrote, but it generates a whole new piece of content from it. Could be interesting in some cases, though I’m still figuring it out.
Greg: I think a lot of these AI tools start as “fun to play with,” then eventually, someone finds a truly practical use for them. Hopefully, people won’t just spam podcasts with junk AI content, but actually find valuable ways to use it.
Thomas: True! We’ll see where it goes. Speaking of stash, I found something super geeky—a single command for your terminal that cleans up all local branches except the main branch.
Greg: Nice! What’s the command?
Thomas: Here it is:
git branch | grep -v "main" | xargs git branch -D
Can you guess what it does? It lists all your local branches and deletes every one whose name doesn’t contain “main.” It’s great if you use a tool like Linear and end up with tons of branches. You can clean them all up in one command and leave only the main branch.
Greg: Love it. I’m going to call it “Git Bankruptcy.” It’s like declaring bankruptcy on all your local branches except main. I could use this on projects where I have way too many dead local branches.
Thomas: “Git Bankruptcy” is a perfect name! You just declare bankruptcy on your branches, delete everything except main, and clean up. I found this trick a couple of weeks ago, and now I use it whenever I have too many branches. Just copy, paste, and you’re back to main.
You know, you could even create a Git alias for it and call it “bankrupt.” Then every time you need to clean up, you just run “git bankrupt” and it’s done.
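(For anyone who wants to try that at home, a one-line alias along these lines should do it; the “bankrupt” name is just the joke from the stream.)
git config --global alias.bankrupt '!git branch | grep -v "main" | xargs git branch -D'
(The leading ! tells Git to run the alias as a shell command, so the pipeline works exactly as before. After that, running git bankrupt deletes every local branch whose name doesn’t contain “main.”)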
Greg: Good idea! We’re taking this to a whole new level! Speaking of main branches, I think it’s time to merge into our main topic of the day.
Yes, let’s try to break things before Black Friday breaks us! So, we were talking earlier about how to set this up, and I think Step One is to show our foundation. I’ve got my screen ready to share, and then we’ll pop over to you, Thomas, since you set all this up.
Thomas: Sure thing. Just to clarify, I didn’t do all the setup on my own; our colleague Von helped us build this Shopware application initially. I added the load-testing setup, but it’s definitely been a team effort.
Greg: Thanks, Von! All right, so we’re in the Upsun console now. Here’s a preview image of our demo store, where you can see branches, environments, and the services we have. This is like a cooking show—we have some things prepped just in case, but we’re also doing it live. Let’s check out the “Apps and Services” section. We’ve got the Shopware app, which I assume is the PHP app, right?
Thomas: Yes, that’s right.
Greg: Next to it, we’ve got a Python app running Locust for load testing. You showed us this open-source tool last week. Then we have caching with Redis, RabbitMQ for queues, file storage, MySQL for the database, and a PHP worker app to manage everything. Anything I missed?
Thomas: Nope, that’s it! We’re running at 3.35 CPUs, 3.75 GB of RAM, and 9 GB of storage across these apps.
Greg: Perfect. Now that we know what we’re working with, let’s transition over to you, Thomas, to dive into the load testing.
Thomas: Great! So, here’s how this demo project is set up. We have one repository with multiple applications: the Shopware application in PHP and Locust in Python for load testing. All the configuration lives in a YAML file, and we can define environment variables either there or in the UI, which is better for anything sensitive.
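(As a rough idea of the shape of that file: this is a heavily abridged sketch, not the demo’s actual config, and the app names, runtimes, and service versions are placeholders.)
applications:
  shopware:
    type: "php:8.3"
  loadtest:
    type: "python:3.11"

services:
  db:
    type: "mariadb:10.11"
  cache:
    type: "redis:7.0"
  queue:
    type: "rabbitmq:3.12"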
Greg: Got it. Locust is going to simulate real user traffic, right?
Thomas: Yes. But it’s not just about flooding traffic; we want to replicate realistic user journeys. For example, I have helper functions that simulate user behavior—some users visit the homepage and leave, others search for products, some browse categories, and some make quick orders. This way, we’re not just generating random clicks but mimicking actual user activity on the site.
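(As a sketch of what those journeys can look like in Locust; the class name, task weights, and paths below are illustrative, not the demo’s exact helpers.)
from locust import HttpUser, task, between

class StoreVisitor(HttpUser):
    # Simulated shoppers pause one to five seconds between actions
    wait_time = between(1, 5)

    @task(3)
    def visit_homepage(self):
        # The most common journey: land on the homepage, maybe leave
        self.client.get("/")

    @task(2)
    def search_products(self):
        # A hypothetical search against the demo store
        self.client.get("/search?search=gift")

    @task(1)
    def browse_category(self):
        # A hypothetical category page
        self.client.get("/clothing/")
(The numbers passed to @task are weights: Locust picks the homepage journey three times as often as the category browse.)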
Greg: That’s amazing. I didn’t realize load testing could be this detailed. In my mind, it was just about sending tons of traffic, but this approach makes so much sense.
Thomas: Exactly! The goal isn’t just to load test but to identify the breaking point of your application. You want to know when your site starts slowing down and at what point it crashes, because this directly translates to the maximum revenue your site can handle during peak times.
Greg: That makes a lot of sense. So, we’ve got it all set up, and now we’re ready to run the test. What’s next?
Thomas: Now that we understand the project setup, I’ll SSH into the Locust container and run the test. I have a command to launch Locust in headless mode with specific parameters: 25 concurrent users with a spawn rate of five, meaning five new users join each second until all 25 are active. This will run for five minutes as a baseline test. Of course, if you want to truly break things, you’d bump those numbers up significantly.
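(That baseline invocation looks something like this; the file name and host URL are placeholders, while the flags themselves are standard Locust options.)
locust -f locustfile.py --headless -u 25 -r 5 -t 5m --host https://<your-environment-url>
(-u sets the total user count, -r the spawn rate in users per second, and -t the run time.)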
Greg: Let’s start small and see what happens. I’ll watch the resources on my side as you run the command.
Thomas: Sounds good. Just a reminder—when scaling an application for Black Friday or any major event, you want to be able to clone your production environment for testing without risking downtime. Running load tests on a clone instead of the actual production branch prevents any potential disruptions.
Greg: Absolutely. And the great thing with Upsun is you can scale resources dynamically if you see that traffic is increasing. You don’t even need to change configuration files—you just scale up or down as needed from the UI.
Thomas: Exactly! Upsun’s flexibility allows you to add resources instantly. You can also duplicate instances of the main application as needed. Behind the scenes, we add a load balancer, so even during peak loads, everything functions smoothly.
Greg: So you can do all this live, right? No need for DevOps intervention in a crisis?
Thomas: Right. You don’t have to alert DevOps and make them scramble; with a few clicks, anyone on the team can handle it. And for your services, if something like the database is the bottleneck, you can increase its resources without adding more instances, since replicating databases requires much more complex data syncing.
Greg: Perfect. So, we’re monitoring the CPU and memory usage here, and I can see traffic building up. We’re not breaking yet, but it’s climbing fast.
Thomas: Yep, and if you look at Blackfire or other observability tools, you can monitor the impact under heavy load in real time. Observability is key because load testing tells you the breaking point, but observability shows you how your app performs under those conditions, where improvements can be made, and where the bottlenecks are.
Greg: It looks like the traffic is leveling off. Should we push it a bit further and really test the limits?
Thomas: Let’s do it! I’ll increase the user count to 100,000, with a higher spawn rate, and run it for 10 minutes. This might be overkill, but Halloween calls for spooky stuff, right?
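(The heavier run is the same command with bigger numbers; the spawn rate below is a made-up example, since the exact figure wasn’t given on the stream.)
locust -f locustfile.py --headless -u 100000 -r 500 -t 10m --host https://<your-environment-url>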
Greg: Exactly! Let’s add a few zeros and see if we can finally break it. I’ve also increased our app instances, so we’re ready to handle more load.
Thomas: Perfect! We should start seeing a lot more traffic, and this will really test if Shopware and the whole setup can withstand an extreme load.
Greg: This is awesome. Thanks for putting all this work into setting up the load testing with Locust. It’s way more detailed than I expected.
Thomas: No problem. It took a few days to set up, but it’s worth it. Any application that expects high traffic should invest in load testing, not just for Black Friday but as a regular practice. Knowing your breaking points lets you have informed discussions with leadership about resources, capacity, and possible upgrades.
Greg: Yes! It’s not just about knowing your limits but being able to communicate those limits with real data. Leadership can see the cost of scaling resources for an event and make an informed decision on how to proceed.
Thomas: Exactly. And by knowing your application’s breaking points, you avoid surprises during peak times. With Upsun, once the test is over, you just delete the environment and pay only for the resources used during testing.
Greg: Couldn’t agree more. Well, I think that wraps up our load testing adventure. Thanks for joining us on this Upsun live stream. Hopefully, you found it helpful, whether you’re prepping for Black Friday or just looking to improve your app’s resilience.
Thomas: Absolutely! Thank you, everyone. Stay safe, keep coding, and we’ll see you next time.
Greg: Take care, everyone!