Mastering Cursor: plan mode, multi-agent, & Composer 1


[JINGLE]

Cool, well we're back, Brad. It feels like it's been a hot minute since we've recorded. I've been busy doing lots of stuff, but how have you been?

I've been good, also busy. I feel like this time I want to be more chill, but it's just been working, been doing stuff on the side. Quite busy, but successful at the same time. I think there have been a lot of things I've been working my butt off for that are coming to fruition, so yeah, not too bad.

Cool, got a haircut too, I think, right?

I did get a haircut. Yeah, that was part of the change. I was overdue for that. The dream is to wake up and look the same, so the haircut gets me closer to that vision.

Wake up and look the same, what do you mean?

You know, you don't have to wet your hair. I feel like with the longer hair, it takes more effort. I want to wake up, roll out of bed, spend like 10 seconds on looking better with my hair, and then move on with my life.

It's crazy you mentioned that because I was this close, like millimeters close, to just asking for a buzz cut when I went and got my hair cut probably two weeks ago. I was this close because I was so over doing the gel, you know what I mean?

Yeah, but then I was like... It looks better with the gel, but it's just more effort. And I actually went to a new barber because I just feel like I wanted something closer. It's like one of those things where you move to an area, you find somebody, and you keep going to them. And there could be better or, you know, closer options. Since I moved up here last year, I was like, "Why am I not finding a closer barber?" So I found one that's walking distance that I can schedule an appointment with online too. So that was kind of nice. I did that recently, very smooth, very nice woman, so I'll probably be doing that again on a more frequent cadence.

Well cool, looking fresh, getting lots of stuff done. Do you want to start with some of the hard work that you've been talking about? Like, what was the hard work that you were doing?

I was hoping you would ask me that. So, big announcement: I've been working on the Split My Expenses iOS app since April. Just last week, I released the app on the App Store. So if you have your iPhone open, you can search "Split My Expenses"—all one word—and you should see it on the App Store. When it first came out last week, I think it took about one or two days to get indexed into the search results, but now you'll see it. I was looking at my stats the other day; I think in the past few days that it's been released, I've had like 40 app downloads, 30 app downloads, and then like 28.

I went to my user feedback forum, which is a public forum where users can submit feedback on Split My Expenses to ask for things. The top-voted one was "iOS app." So it felt amazing to go to that post and say, "Hey guys, here's a direct link to the App Store listing. Go download the app." I've spent so many months building this, on and off, of course, but it's taken me many, many, many hours, and yeah, it's finally out there. It's great to get downloads, great to get reviews. Right now I'm asking folks to try it out and give me feedback. Of course, it's version one. I spent a lot of time on it, but it *is* version one, so there are going to be bugs, but overall I'm extremely happy and proud of what I shipped.

I think building a React Native app for the first time was a ton of learning, and then following that release, I'm planning to do the Android app sometime this week. So hopefully by the time the podcast is out and you're listening to it, it is out on the Google Play Store and the iOS App Store. Then following that will be a bit more marketing effort to get it out there. It was a huge headache to get it out, both the coding and getting through App Store review, which is always a fun time. But once you're through all that, you're on the App Store. Usually, subsequent updates are pretty easy. But yeah, super, super happy that that's out.

Yeah, well, congratulations. A couple of things. There is also proof of Brad's complete grind mode because he sent a text at like 4:30 in the morning doing something. I can't remember what you sent, but it was some screenshot of a submission or something like that. So yeah, the grind is real.

Yeah, I totally forgot about that. So it was League Worlds that day, and I was staying up to watch Worlds, but I was also staying up to get my App Store submission in. League Worlds started, I think, at 11:00 p.m. California time and ended roughly at 4:00 a.m. California time. I was working on my app the whole time I was watching this, just trying to fix a bunch of random stuff. It was my time to go through all these tiny UI nitpicks. And I fixed all these things, but I was looking at the clock. I was like, "Oh my gosh, it's 4:00 a.m. and I'm submitting my app," and I'm just dead tired, but it was worth it. And sadly, that build didn't get approved. Apple took issue with a few things. I think I had a "Sign in with Apple" bug that they rejected me for, but it was a sprint. I was like, "I want to get this in on Saturday, so I just have more time to chill." Yeah, I took a photo at like 4:30 a.m. It was kind of an accomplishment to spend that much grind time getting stuff done.

Yeah, yeah, that's cool. So I wanted to ask you, because I know you'd talked about the SME mobile app for some time. I wanted to ask, now that you're done and it's pushed out—like you said, there are going to be things that you're going to have to still work on and tweak, of course. But now that it's there and people can download it, looking back now, how much longer do you think it would have taken you without AI? I mean, your background is as a big web developer—PHP, Laravel—and a big iOS developer, but not React Native by trade, I think. But keep me honest. I mean, you're an engineer, so you can pick it up. But how pivotal was AI in getting to where you are now with this app?

Yeah, I mean, I don't want to say it too loudly, but extremely pivotal. I think when folks hear that, they're like, "Oh, the quality of the app probably isn't good." But again, I treated it as context engineering, not vibe coding. I am coding with guardrails in place and directing AI. So when I started in April, I think Claude 3.5 Sonnet was the latest model out. And I'd asked it to do a few things, but I had written that initial app back in April, spending a month or two building it manually. Then I took a break. By the time I picked it back up this year, AI was here. Claude Code came out in May, so that didn't exist when I started. But I was still using AI pretty heavily at the time to figure out how to write some of this code better. So I did a bunch of AI-assisted stuff, although it was through chats for a few months.

Then once Claude Code came out, things definitely sped up. I spent probably most of my time working with Claude Code, building out tons and tons of features that I just knew should exist but didn't have the time to do. So I would say at this point, my codebase started out manual, then I ended up using AI chat apps to copy and paste code out, which is the manual, caveman way. Then finally, I ended up with the kind of CLI tools that we use today. And from there, it accelerated my pace very rapidly. I remember back in April, I thought, "Oh, this app will take me two months." That was without AI coding, and that was a complete failure of an estimate. Because it's taken me months *with* AI coding. So without these tools, I would not have launched anywhere close to now, and I probably would have cut out half the features. So, it was an extreme level-up, and I would recommend everyone to use it. My code is still good. So it's AI-written, but it's good. There will be bugs with any software, handwritten or AI-written.

Yeah. That's cool. I was setting up an old laptop probably about two weeks ago now, and it was running Linux. I was trying to set up PyCharm and I hadn't really ever used PyCharm, but I was like, "Oh, let me just use this and give it a shot." And I'm sure there's AI in PyCharm. I don't know PyCharm that well, but those are JetBrains products, right? That's the company that makes those different IDEs. You use PhpStorm, or you did at some point, right?

I did, yeah, until Claude Code, I did.

Yeah. Okay. So I think you know where I'm going with this. I was trying to use PyCharm. And I honestly felt like I couldn't do anything because I was missing my Cursor auto-completes, missing my suggestions. I was just making a simple FastAPI app and I was just kind of toying around. But I was just like, "What do I... I forgot what I need to do here. Like, what do I have to do again?" Because I had become so reliant on the Cursor auto-complete and then, you know, just the agent functionality in Cursor. I was like, "Wow, I don't think I could ever go back to it." Not that I'm a coder by trade, but I was just like, "Wow, I can't imagine what engineering would feel like if you took AI away now." Which wasn't that long ago, you know? You were doing that just two years ago.

There's a lot of talk of, you know, engineering being changed forever. And I think in those key moments, I feel exactly the same, where if the internet is down or I'm on a plane and I can't use AI, I'm like, my productivity is definitely hindered by this. I can still critically think and get things done, but writing the code, like doing multiple things at once—where I've definitely elevated my workflow to have two agents on my web codebase, two agents on my mobile codebase, and even have those work together—it's a great productivity unlock. And yeah, when you remove that AI from the picture and you do software engineering, you can still do a good job and write great code, probably even better code than the AI could write. But it takes longer, takes more mental effort. You don't ship as fast. And I think in the age of AI, it's about writing good-enough code and shipping fast. And that combination is really powerful.

Yeah, that's cool. Well, you got it done. So now you get to kind of put your feet up and just relax, right?

Now I'll just relax... until the Android app is out. But yes, yes, very, very shortly. I mean, I was always concerned that no one was going to download it. I got, you know, 40 downloads the first day. I felt that was pretty successful. And I had no clue if this would end up being five a day or two a day, but I'll take the wins where I can. So now I'm working on the marketing side and the Android side, and a lot of work to be done there, but I'm not sitting on thousands of lines of code that are unreleased. Like, things will be out and bugs will be found and I'll fix them. But just spending so much time when you don't have anything to show for it, that part sucks. So once you get that thing out there, you're like, "Hey, here's what I've been working on," because people have messaged me for the past year, "Hey, mobile app, mobile app." And I'm like, "Hey, it's coming."

Yeah. Yeah, I think you had that on your bingo card, "mobile app," I want to say. I want to say you did, yeah.

Yeah. That was all in my control, so if I didn't hit that one, that'd be pretty sad. But uh, it is there.

Yeah, no, it's cool. I have two things for you, okay? One, I don't know if you remember this, but I came up with a genius advertisement for SME. Do you remember this?

I think you told me about it, but just to be fair, I do have quite a few ideas of my own, but yeah.

Hey, look, look. I know how to sell, baby. Okay. So, basically now with Sora, all you have to do is have friends go out to dinner at like a diner or a sushi place or whatever. And then when the bill comes, it's that awkward moment where they look around at each other. That's one scene. And then the next scene, the bill comes and everyone's smiling and they just pull up your app, and they do it. And no words need to be said, it's just the visuals, you know? The video with SME is vibrant and upbeat and everyone's laughing and smiling. The one without SME is awkward and unsure. It's gold, Jerry.

It's a good one. Yeah. I mean, even off that, my end-all, be-all vision is that you can take a photo of a receipt and actually describe with your voice how to split it. So if you go out to dinner with people and you're like, "Oh, this person got the pizza, this person got the steak, this person got, you know, two bottles of wine." Like just be able to describe that. And then not only does it itemize the receipt, but it applies who got what by name. Uh, that's coming. I'm going to put that on my bingo card for 2026 because that is coming, and it'll just make things easier. I feel like getting data into the app is sometimes a challenge; with receipt scanning, it's better. But you still need to say who did what, and that's tapping and, you know, kind of configuring the app. If you could just use your voice and use AI, uh, it'll be so much better. And that's kind of the pinnacle I see of fast data entry without hassle. So once that feature is out in 2026, I'm going to hit you up for the, you know, the advertisement. We'll make it happen and we'll see the results in 2026.
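
For anyone curious what that voice-driven split might look like under the hood, here's a rough, purely hypothetical sketch of the structured data such a feature would need to produce; none of the types or names come from the actual Split My Expenses codebase.

```python
# Hypothetical sketch only: the shape of data a "describe the split with your voice"
# feature might produce. Nothing here reflects the real Split My Expenses code.
from dataclasses import dataclass, field


@dataclass
class ReceiptItem:
    name: str
    price: float
    assigned_to: list[str] = field(default_factory=list)  # who owes for this item


@dataclass
class ParsedSplit:
    items: list[ReceiptItem]

    def totals(self) -> dict[str, float]:
        """Split each item's price evenly among the people assigned to it."""
        owed: dict[str, float] = {}
        for item in self.items:
            share = item.price / len(item.assigned_to)
            for person in item.assigned_to:
                owed[person] = owed.get(person, 0.0) + share
        return owed


# The AI step would take the itemized receipt plus the spoken description
# ("this person got the pizza, this person got the steak...") and fill this in:
split = ParsedSplit(items=[
    ReceiptItem("Margherita pizza", 18.00, ["Alice"]),
    ReceiptItem("Ribeye steak", 42.00, ["Bob"]),
    ReceiptItem("Bottle of wine", 35.00, ["Alice", "Bob"]),  # shared item
])
print(split.totals())  # {'Alice': 35.5, 'Bob': 59.5}
```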

Yeah. The other one that you'll have to do is the um, the OpenAI Apps SDK, right? You'll have to do that at some point.

Yeah.

Like, because, you know, you could just—because I think the idea is that you'd type something like "@Split My Expenses, here's my receipt, I got this with Ben, split it this way." And like it makes an API call to your system or whatever. Like that could be something too that you have to sign up for.

Yeah, that would be cool. I don't know what their status is with all that, but I think it'd be cool to be a first mover on an AI integration.

Yeah, well, cool. Well, you know, I did a little bit of, not to the same degree, but I did a little bit of vibe coding since the last time that we spoke on the pod. Just a quick update for my world. I had been using Squarespace as my website host and builder for my accounting firm, which is Catalyst CFO. And I just hated Squarespace, to be honest with you. I think no matter who you talk to in terms of a pre-built website builder, there's Wix, there's Squarespace, there are some other ones. I don't think anyone really loves them because I think everyone has a certain idea of what they want in a website, but then it's hard to get it exactly right.

But, um, I was like, I'm over using Squarespace, I'm over paying for it. I know there's a better way to do it. And I was like, let me just kind of vibe code. Now vibe coding has a negative connotation.

It does. It does a little bit, I think. Just slightly.

Yeah. But like my site is just static pages. It's information, it's about my firm, it's about who I am. So it's not functional like a web application. But yeah, I mean, I used Cursor's "plan mode," which we can talk about and dissect. But basically, I said, "Hey, this is what I want to see, these are the pages that I want on my website, this is how I want it to come across." So I gave it a marketing point of view, I gave it the tech stack that I was comfortable with and what I'm familiar with. And then just some basic information about my firm and pricing and all that, and let it rip on plan mode. And so that was super helpful because basically with Cursor's plan mode, you can now toggle between "agent" and "ask," and then this third mode, "plan mode." I think it basically goes and creates a markdown file, and from there you can let it loose and let it do its thing. And I think from what I've seen online and just from my own experience, the output is much better when it has that kind of guiding document to go through versus constantly one-prompting each little add-on that you want to do for your website. So yeah, proud to say that I got CatalystCFO.co up and running with that.

Looks a lot better too. I mean, I think Ben sent it to me and I was like, "Wow, this looks pretty good." I don't know what kind of theme it reminded me of, but it felt kind of light and to the point, professional but kind of friendly. That's how I would put it. It looks a lot better, and I have used Squarespace myself—it kind of sucks. I'm glad you took this route. I guess I have a few questions. Is this on a VPS? Are we talking EC2? Are we talking Vercel? Where did it end up?

Yeah, so I found—you might be familiar with this, but I wasn't familiar with it until I was looking and Googling things—Render. Are you familiar with Render?

Oh yeah, yeah, yeah.

Yeah. So I'm familiar with FastAPI. I think if you've listened to this podcast a lot, you know Python's my go-to. And creating a FastAPI application and then just dockerizing it was really easy to do. And so basically I had that plan mode. I said in the plan, "I also want to put this within a Docker container," because those have always just been easier to deploy. I remember I had to do that when I was using DigitalOcean a long time ago with a web app. And so just doing that, and yeah, I basically pointed the Render web app to the repo in GitHub. And every time I push an update, it just auto-deploys from that repo. So yeah, super easy to use, very cost-effective too. I think for the tier I'm using for Augmentic, it's like seven bucks a month. And then for Catalyst, I did one tier higher and I think it's like 19 bucks a month, I want to say, where Squarespace was charging me $40 a month for one site, you know?
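
For context, a static-style site like the one described can be just a few dozen lines of Python. Here's a minimal sketch, assuming a FastAPI app serving marketing pages; this is not the actual Catalyst CFO code, and the routes and copy are made up for illustration.

```python
# Minimal sketch of a FastAPI app serving static-style marketing pages.
# Routes and content are illustrative, not the real Catalyst CFO site.
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()


@app.get("/", response_class=HTMLResponse)
def home() -> str:
    # A real site would render templates (e.g., Jinja2); a string keeps the sketch short.
    return "<h1>Catalyst CFO</h1><p>Fractional CFO and accounting services.</p>"


@app.get("/pricing", response_class=HTMLResponse)
def pricing() -> str:
    return "<h1>Pricing</h1><p>Simple monthly tiers.</p>"
```

From there, a short Dockerfile that launches the app with uvicorn is all a host like Render needs: point the web service at the GitHub repo and it rebuilds the container on every push, which matches the auto-deploy flow Ben describes.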

Nice. So you got cost savings and a better design and more control. Isn't AI great?

Yeah, and on the prompting and how to make it look, what I did on the marketing—I don't know if you call it marketing, maybe branding side—is I knew what font I wanted, I knew what color scheme I wanted because I'd had the color scheme for a while and I was trying to do that with the old Squarespace site. So basically I just told the AI as part of the branding template, "Use these color schemes," I gave it the hex codes, "use this font package," I think it's Poppins. So I was like, "I need it to be consistent with those sprinkles of branding throughout the whole site," and it did a pretty good job of that. So yeah, I was very impressed with the plan mode for Cursor.

I wanted to ask, in your experience with Cursor, are you using the auto model chooser? So when you have their model selection, there's "auto" and there's also, you know, you can pick specific models like GPT-5.1 Codex, GPT-5.1-Codex-high. What was your experience in both "plan" and "execute" modes?

Yeah, most of the time I just did auto. A couple of times I picked—like I'm looking at it now and I had selected 5.1 Codex Mini.

Oh, Mini, okay. That's crazy. Historically, I was told by some of the Cursor folks to try to use GPT-5.1-Codex-high. So there's GPT-5.1, there's 5.1 Codex, which is more built for the CLI, better tool calling. And then there's 5.1-Codex-high, which has a higher thinking budget. So what I was doing with Cursor, which I also think is a fantastic update, kudos to them: you go through this "plan" phase and then you "execute the plan." I would highly suggest selecting a very good thinking model for that plan phase. So like you described, you type out a long prompt of what you want. It'll go search the code to figure out how to do that. It'll write you a markdown file, which you can actually edit within their UI and modify whatever it has, or you can keep chatting with it to refine that plan.

Then once you're happy with that plan, you can bring it to "execute mode," kind of like the "build" we talked about. That one is really cool because Cursor has their own model right now, Composer 1, which is very fast and efficient and it's pretty darn good. I think it's trained off a Qwen model that's pretty recent. But essentially, based on the input data that Cursor gets from people using its application, they've used that training data to create Composer 1. What that gets you is extremely good coding quality with really high speed, which is really, really awesome for Cursor. I think they're trying to not only have their CLI or kind of AI IDE environment but also have a model to power it. So what I would suggest is switching your model to a thinking model for that plan phase, which I think will produce better results. But once you're done with planning, you can choose a separate model to actually code it. And what I found is Composer 1 is like 85 to 90% of the way there in terms of accuracy for building something. And it's much faster. So for everyone listening, I would definitely recommend like a 5.1-Codex-high for planning and probably Composer 1 or the exact same model for execution, because I found really, really good results with that. I think they've done a great job making the plan mode more of a first-class citizen within the entire workflow.

Do you know—not to put you on the spot, but do you know the "cycle agent count" thing in Cursor? Where you can like... okay.

Yeah, so we'll keep it in here for the pod, but if anyone's using that cycle agent count in Cursor, let us know. I was just looking at it recently. So I just updated my Cursor. I'm maybe like a week or two behind, but I just updated it. So they had the new "agents" tab and stuff like that. But when I look at my model selector, right next to it, it has like a little 1x and then I can click a dropdown and it says 2x, 3x.

Oh, that is that one. Yeah. Yeah. Okay. That's the one I was mentioning. Okay. So you can just select multiple models.

Okay. Because I've seen... No, that one is: it'll have multiple agents run the same prompt. And then you choose which one you think is best.

Okay. Interesting. Okay. I misunderstood what you were saying there. Now that makes sense, tied to that little button.

Yeah, it's a little confusing. Under the hood, it's like when you use Git, you have branches, but there's a Git worktree which allows you to have a different branch, but that branch also exists in a different folder. So therefore you can have multiple agents do different things, but they're acting in different folders and therefore you can compare the outputs without overwriting each other. But it's a little bit of a complicated workflow, so I don't actually use it because of that hurdle.
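
For listeners who haven't used worktrees, here's a small sketch of the mechanic Brad is describing; the branch and folder names are made up, and the subprocess wrapper is just one way to script it.

```python
# Each "agent" gets its own branch checked out into its own folder via `git worktree`,
# so parallel edits never overwrite each other. Names below are hypothetical.
import subprocess


def add_worktree(repo: str, branch: str, folder: str) -> None:
    """Create a new branch and check it out into a separate directory."""
    subprocess.run(
        ["git", "-C", repo, "worktree", "add", "-b", branch, folder],
        check=True,
    )


if __name__ == "__main__":
    # Two agents running the same prompt, each in an isolated folder.
    add_worktree(".", "agent-a-attempt", "../attempt-a")
    add_worktree(".", "agent-b-attempt", "../attempt-b")
    # Compare the two results, keep the better one, then clean up with
    # `git worktree remove ../attempt-a` (and likewise for attempt-b).
```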

Yeah, yeah. And then one other thing I canceled—then we can move off of this topic—but I canceled Claude. I said I'm done. I gave it a shot. It was back when I was canceling Squarespace, I'm like, looking at my subscriptions. I was like, "I tried. I'm just not a Claude guy. I'm not an Anthropic guy." So I saved myself 20 bucks. I'm like, "I'm good with Cursor, I'm good with OpenAI with ChatGPT, I love Gemini, like I don't need Claude." That was my thought process.

I actually don't use Claude.ai, so their chat app, very often, but of course I'm a big fan of Claude Code. What I did notice recently is I was using deep research to find some information for some web queries that I had. When I was comparing the exact same prompt in ChatGPT versus Claude.ai, I think Claude searched 687 web articles to collect evidence, and I think ChatGPT searched like 200. I thought, "Wow, I didn't know Claude was so good at fetching the context." However, when I looked at the output from both of these deep research queries, they were very similar. So like, Claude seemed to scrape three times as many pages, ChatGPT scraped a third of that, but the results were roughly the same. I think earlier this year, Cloudflare was getting really upset at Anthropic for scraping a bunch of stuff. One query for deep research equals like 500 Google search queries. And then they go scrape all these results. So I think it still holds true that Anthropic does a lot more scraping than ChatGPT does. And, you know, with the same output, I didn't really see it was worth it. Maybe in a different query it'd be worth it, but I think that's one thing that Claude could be good at.

Yeah, and I'd be curious too, if you were to do that same thing with Google's, or with Gemini's, deep research, because you have to think that Google has the leg up on how to search, you know, given that it's been the de facto standard. But yeah, I'd be curious because I know that with their ADK, their Agent Development Kit, Google Search, or web search as they call it, is just a function that's part of the package. It's not like LangGraph and LangChain, which I'm used to, where you'd have to use a library like Tavily Search to look for data online and stuff like that. When building with Google's ADK, they had a whole pre-built function for web search, and it was really good from what I saw just messing around with it. So I'd be curious.

Yeah. Well, speaking of, I guess, Google—one thing on those subscriptions too, because we were just talking about it. I pay for G Suite for my business, and with that, you obviously get the email, Google Drive, and all that stuff. But then you also get Gemini 1.5 Pro and NotebookLM. And so that's such a good deal for what I pay, which I think is like 20 or 30 bucks a month.

That's not bad, yeah.

Yeah. But you compare that to Claude, where you just get the chat app. To me, Google is just, you know how I feel about Google, they're just killing it. Um, but yeah, speaking of Google, I guess, there's some potential big news coming soon, we think. Do you want to elaborate on what that is to kind of wrap this one up?

Yeah, if you're on Tech Twitter or X now, my feed has been inundated with talk of Gemini 3.0. Even Google CEO Sundar Pichai has been talking about it and quote-tweeting various people talking about Gemini 3. So it's coming out this week, supposedly tomorrow. We're recording today on Monday, November 18th. The supposed drop is Tuesday, November 19th. I think there are even betting markets on when it's going to drop. I think Google stock was even up like 5% today, so it's very, very hyped. I have no clue if that rise in Google stock today has to do with the speculation around Gemini.

But I think what I've been hearing is that it's a game changer. Oh my god, AGI. Like, I don't know how to put my finger on the exact details of why it's better. But from what I've seen, there were supposedly a few people who had access, and it's become such a meme at this point, I can't tell who's had real access and who's just adding fuel to the fire. But these people are really touting Gemini 3 as being a large leap in intelligence and code-writing ability. So we'll have to see.

I'm hoping—this is kind of my stat sheet—I'm hoping for like a 2 million context window. I'm hoping that it beats everything else out on all the latest benchmarks. Again, there are so many benchmarks out there. It feels weird to point at one, and I couldn't say I'm an expert at any, but I'm hoping it has much better tool calling because Gemini CLI always fumbled on calling tools. That's where Claude's model really shined; it would call tools successfully like 98% of the time. And when you're working in Cursor or Claude Code, you need to be able to call tools without failing. So I'm hoping, yeah, big context window, great intelligence leap, ideally not too slow. I've tasted the Composer 1s of the world, and man, it'd be great to have a fast model. And then lastly, the Google pricing. Keep the Google pricing because that pricing is dirt cheap. I use their models for Split My Expenses because you just can't beat the Google pricing. So yeah, hopefully tomorrow is a big day, Google releases it, we all get access, and there's no staged rollout or anything. And what I'm going to do is just update my models to Gemini 3.0—3.0 Flash, 3.0 Pro. Hopefully that's here. Really excited about it.

Yeah, I was just as you were talking, I was just trying to see if anyone has any estimates for what the leg up will be. And from what I'm just seeing right now, it doesn't seem like anyone has a really good idea other than just hearsay. But yeah, on Twitter, every now and then for the last month, honestly, maybe two months, on a random day, Gemini will be trending. I'm like, "Oh, what's going on?" And people will say that it's been released. And it's just like bot farms basically trying to get eyeballs.

I've seen that too. Yeah. Yeah. Yeah, that's crazy.

Yeah. Um, but um, yeah, I mean, that's something that's really cool to see. I don't know, you know, will we hit a point—we've talked about this before—of how much intelligence is enough? You know, like will we get to a point where it's about as good as it can get in this current cycle, and now it needs to be, like you said, focused on speed and stuff like that? Um, so I'll be curious, from the intelligence perspective: is there anything new it can do, just in layman's terms? I don't care that it can do a rocket science research report, but just in layman's terms, what can it do that it couldn't do in the last version of Gemini? You know, that's what I'll be interested to see if they announce: like, what's the takeaway, right?

Yeah, it'll be hard. I feel like a lot of those things come out after the fact too, where, you know, a really intelligent model comes out, people use it for the same old, same old. But those new use cases really take some time to unlock. So yeah, I'm curious to see what people take a spin with. I think they put a lot more effort onto Gemini CLI, which is good. OpenAI's also put a lot of effort on Codex, their CLI, which is great. So like, we're getting to the point where, yeah, intelligence is high across the board. The CLIs are converging and, you know, plan mode is here to stay. Like where do we differentiate? Well, you know, speed, intelligence, hopefully is eked out by someone. And then cost. And again, I think Google with the TPUs, they are set up for success. If they can build the right model that harnesses their own infrastructure and large data pipeline—they have search—they're kind of winning in that regard.

I think the expectation is pretty high though. Um, this happens with OpenAI too, like, Sam Altman has teased GPT-5—the "death star" and all those memes were popping up. Then when it came out, there was a bit of confusion, like, "Oh, we expected more." So the fact that Gemini has been trending, pops up on my feed all the time, expectations are definitely there. I hope they deliver. I hope they crush it. I hope they overdeliver. Um, but yet to be seen. I'm excited for the next pod. It should definitely be out by then and we can kind of give our two cents and deep dive. And if there's anything I know about OpenAI, it's that they always fire back when Google drops something. So if Gemini comes out tomorrow, I wouldn't put it past OpenAI to put a model out in the next two weeks after that release to try to, you know, catch up or show face. So we'll have to see.

Yeah, cool. All right, should we, uh, wrap this one up with our bookmarks?

Yeah, let's do it. Uh, do you want to go first?

Yeah, yeah, I can. So mine, I'm excited about this one, um, because it is accounting and AI, which, you know, is just the sweet spot. Um, so basically, it is—there's a bunch of different articles, but the one I had bookmarked and wanted to kind of take another read through is from SeekingAlpha.com, which is like an investment, stock kind of website. Um, but basically it's about Michael Burry, who is the person that Christian Bale plays in *The Big Short*. Um, basically he has a hedge fund and he's exiting his trades. He's closing out his hedge fund. Um, because basically he thinks that there's an AI bubble here, which we've talked about before, you know, is AI in a bubble or not.

But why it's significant and why there's accounting involved is because they specifically call out depreciation expense. Now, for those that are maybe not familiar with what depreciation expense is, it's where you have an asset that you buy and you think it's going to last you three years. You would straight-line the expense of that asset over three years to reflect the use of it over that time. And what I think Michael Burry is saying is that with the AI chips, a lot of these companies' chips are being depreciated over a five- to six-year useful life cycle, which is pretty consistent with general IT equipment. But the point that he's making is that these chips really are only going to have a one-year cycle or less, at least less than five to six years. And so the cost that they're going to need to replace the chips, you know, as Nvidia releases more and more chips each year, they're going to need to replace those chips. But if they're only recognizing the depreciation expense over a five-year useful life, they are overstating their profit or understating their expenses because they're spreading that purchase over too long of a period, when really they know it's only going to last them for a year or two years.
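
To make the arithmetic concrete, here's a tiny back-of-the-envelope illustration with made-up numbers (not figures from the article):

```python
# Toy numbers only, to illustrate the straight-line depreciation point above.
capex = 10_000_000_000  # hypothetical $10B spent on AI chips

for useful_life_years in (5, 2):
    annual_expense = capex / useful_life_years  # straight-line depreciation
    print(f"{useful_life_years}-year life: ${annual_expense:,.0f} of expense per year")

# 5-year life: $2,000,000,000 of expense per year
# 2-year life: $5,000,000,000 of expense per year
# If the chips realistically last closer to two years, the five-year schedule books
# $3B less expense each year, which makes reported profit look that much higher.
```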

So it's really interesting. You know, and it talks about all the billions of dollars being spent on capex, which is capital expenditures on these assets that hyperscalers are buying from Nvidia. And they even have a quote from the Meta CFO, I think from last year, talking about how there's going to be an acceleration in depreciation expense associated with all the spending that they're doing with AI. So it seems interesting. For companies that are still depreciating some of these Nvidia chips over a five- to six-year useful life, it's a good question: is that really appropriate, or should it be re-evaluated? Um, so yeah, that's something that's interesting with all the talk about hype and bubbles and all that.

Yeah, that's what I was thinking. My first thought was this bubble might come crashing down if people are making the wrong estimates there.

Yeah, yeah, interesting, right? So, yeah, I'm going to give it another read-through, but um, yeah, it could be something to watch out for as the year goes on.

Cool. My bookmark is from the Cursor team, since we talked about them earlier. So Sasha Rush, who is a researcher at Cursor, gave, I think, a 20-minute presentation about how they trained the Cursor model. So I was taking a look at it earlier today. Pretty interesting; it kind of talks about how they use their tab-completion data to power this model. And again, this is the one we were talking about earlier, Composer 1. Pretty fast. I would actually say really fast and pretty smart is how I'd describe it. Um, so it's kind of interesting to see how Cursor is moving from being the model harness, where they take in OpenAI, Claude, etc., to being the model creator, where they have a tab-completion model and a full-on Composer 1 chat model and agent model. Um, so if you're interested in how they did that, it's a 20-minute video from their team describing how they trained this model with their own proprietary data. So pretty cool.

Nice, nice. Um, one last bookmark, and then I promise we'll wrap it up, is I can't believe that Apple is releasing an iPhone sock. Have you seen that?

Oh, I did see that. Yeah, that was insane.

Unbelievable. $230. It's like $200. Yeah. Unbelievable. I just, look, I don't want to...

Is it on your wish list?

Don't ever get me that. If I see that sock in my Christmas presents, I'll be so mad.

I'll have to get it for you.

No, I mean it's unbelievable. I don't know what they're doing. I'm sorry. It's actually called the iPhone Pocket. So for folks listening who are interested, the iPhone Pocket. Yeah, and um, I'm looking at the article headlines, a lot of people are like, "Why, why is this happening? Why are we releasing this?" Um, so yeah. That's uh, you know, we've talked about Apple before and their strange lack of growth in some areas, and maybe they're too focused on socks, I don't know. But uh, in any case, yeah, interesting, but we'll see where that goes.

Awesome. Well, good stuff, Brad. We'll wrap it up there. And um, yeah, until next time.

Awesome. See you next time.

See you.

[OUTRO MUSIC]

Thank you for listening to the Breakeven Brothers podcast. If you enjoyed the episode, please leave us a five-star review on Spotify, Apple Podcasts, or wherever else you may be listening from. Also, be sure to subscribe to our show and YouTube channel so you never miss an episode. Thanks and take care.

All views and opinions expressed by Bradley and Bennett are solely their own and are not affiliated with any external parties.

[JINGLE]

Creators and Guests

Bennett Bernard (Host)
Mortgage Accounting & Finance at Zillow. Tweets about Mortgage Banking and random thoughts. My views are my own and have not been reviewed/approved by Zillow.

Bradley Bernard (Host)
Coder, builder, mobile app developer, & aspiring creator. Software Engineer at @Snap working on the iOS app. Views expressed are my own.