Building useful bots & open source AI


Bennett Bernard (00:11)
This is episode five of the Break Even Brothers podcast. I'm your host, Bennett Bernard, with my co-host, Bradley Bernard. How was your week?

Bradley Bernard (00:20)
Good. Had a bachelor party this weekend, went to LA. It was like a huge, huge villa, so it had like a tennis court, basketball court, a giant pool, jacuzzi. Was a ton of fun. It was like 13 dudes, and yeah, fun, fun activities. First time I played sports and, like, tried really hard in a long time, so that was a lot of fun. Got a good sweat in, but got like a major sunburn, had to, you know, make sure I'm okay after that. But yeah, it was fun, nice weekend.

We played basketball and pickleball. And I feel like we did the basketball camps growing up, like, all the time. And I feel like after that, I never really played, because I was mostly into coding and gaming. But then, like, going back to basketball, I'm always like, can I dribble? Can I shoot? Like, I feel like

You know, it's always different coming back after a long time and I wasn't terrible, which I was happy about. Like I wasn't awkward dribbling or anything, but definitely not as good as the old days. Not that I was any good, but you know, going through the drills and stuff, you're a little bit better.

Bennett Bernard (01:16)
Yeah, you did more of the Friday Night Lights stuff than the basketball stuff, I feel like. Yeah, I did the basketball stuff. That's cool. Yeah, no, my weekend was good. Pretty uneventful, I think, or I just can't remember much of it. Not for any crazy partying at all, but just, you know, just surviving out here in the desert heat. So that's about it. It's like 110, 108. Yeah, it's horrible.

Bradley Bernard (01:19)
Yeah,

What was the temp?

it was 90, I think, in LA, which felt really warm, but I don't think I could do 110. It was like the Beverly Hills area. So from where I'm at, it's like an hour drive. And then you drive through kind of like not the nicest area, then you kind of go straight up into the hills and

Bennett Bernard (01:47)
I can't either. Where in LA was it

Bradley Bernard (02:03)
the Airbnb was at the top of the hill, so it had this beautiful view. It was great. It was definitely an expensive weekend, but it was fun.

Bennett Bernard (02:11)
Cool. So it sounds like we both had pretty good weekends. So one thing I wanted to talk about was bots, and there's a reason behind why I've been thinking about that. But before I get into my own reasons, people should know, and our audience should know, that you have built many a bot, many different kinds of bots, all doing different kinds of things.

So I don't know if you'd consider yourself a bot expert, but when I see you do them, or when I've heard you talk about them, I feel like you have a lot of excitement about building bots. I guess tell me if that's wrong or right, and kind of what your thoughts are. And if you can share some of the bots that you've built, it might be interesting for people to hear.

Bradley Bernard (02:45)
Mm-hmm. Yeah, absolutely right. I think optimizing and creating some sort of

pipeline to do something for me that I would do manually is a lot of fun. It's, like, fun on the technical side, fun on the reward side. So if you pull it off correctly, it's definitely worth it. But I think, stepping back a bit, bots have, like, a few different connotations. So you can think of, like, the negative connotation of, like, a sneaker bot or, like, a Ticketmaster bot, something that's, you know, beating the lines, buying something, and kind of not doing something in a fair way. There's other bots that, you know, are less harmful,

something that can automate an action that you do in your life. So one bot that I had built that was completely harmless was when I was in New York, I think in 2022, for about a month. I was in Brooklyn.

And when I entered a gym that me and my housemates went to, there was this system that would count how many people were at the gym. And it blew my mind that it could figure out the accurate count of people in the gym. So I went to the front desk guy and I said, hey, like, I saw online you have a people counter, do you know how that works? And the front desk guy was like, no, I have no clue. Like, you know, we bought the software or whatever. And so what I did is I pulled up their website, which has this live person counter. I see it, I'm walking in and out of the door.

It's literally updating, like, as I close the door, as I open the door. I tell my friends, like, hey, three people go walk through, like, look down, so, you know, maybe it won't be able to detect it. And it got it right every single damn time. And I was like, holy crap, that's cool. And so that's the reason why I built a bot for this. The bot I built was essentially tracking the gym population over time

just to graph it, just to take a look at it, because when we were out there, it was a little bit crowded. It was a climbing gym that had lifting as well. But since we were kind of, like, working from home and going on the weekends, we wanted to figure out when's the best time to go. So I probably wrote the bot in an hour, built up a little Laravel application that essentially went to their website, took that live counter number, and then just stored it in a database, like, every minute. And so that probably ran for two days. Then I pulled in a simple charting library,

charted this over time, and you could see, like, when the gym was popular, when it wasn't popular. And so that was a fun one, because the technology that the gym uses wasn't super advanced. I, like, went into their webpage, inspect element on Chrome, saw where they were pulling that data from, and then wrote, like, simple code to do the same thing. They just had an API they were hitting, and that returned the count. So I did the exact same thing for my web app, scraped it, put it in a chart, showed everyone in my house and said, hey, here's the raw data. Like, this is when we should go.

And that one was so harmless, but, I don't know, I had so much fun building it. I feel like it actually helped us, because we went to the gym at a less crowded time. But yeah, there's plenty that I've made that are along similar lines or different lines. And it's so much fun. Like, I really do enjoy it.
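A minimal sketch of that gym-counter bot in Python. The endpoint URL and JSON field name below are invented placeholders (the real bot was a Laravel app, and the gym's actual API isn't known here); the idea is the same: poll, store, then group by hour.

```python
import sqlite3

# Hypothetical endpoint; the real gym's API path and response shape aren't known.
COUNTER_URL = "https://example-gym.com/api/live-count"

def fetch_count():
    """Fetch the live occupancy count from the gym's public counter API.
    Uses the `requests` package; swap in a fake in tests/demos."""
    import requests
    return int(requests.get(COUNTER_URL, timeout=10).json()["count"])

def init_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS readings (ts INTEGER, count INTEGER)")
    return conn

def store_reading(conn, ts, count):
    # One row per poll: unix timestamp plus the reported headcount.
    conn.execute("INSERT INTO readings (ts, count) VALUES (?, ?)", (ts, count))
    conn.commit()

def busiest_hours(conn):
    """Average occupancy per hour of day (UTC), busiest first."""
    rows = conn.execute(
        "SELECT strftime('%H', ts, 'unixepoch') AS hour, AVG(count) "
        "FROM readings GROUP BY hour ORDER BY 2 DESC"
    ).fetchall()
    return [(h, round(avg, 1)) for h, avg in rows]
```

A cron job (or a loop with `time.sleep(60)`) calling `store_reading(conn, int(time.time()), fetch_count())` for a couple of days, then `busiest_hours(conn)`, gives you the chart data he describes.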

Bennett Bernard (05:36)
Yeah, that's cool. The thing I was going to ask you, and then you kind of touched on it, but I wanted to double-click on it a little bit: you looked at the website, like the HTML elements, and was that number a static value that you grabbed, or, you said it was like an API that was updating that number on their website?

Bradley Bernard (05:56)
Yeah. So they had a webpage, and that webpage was making API requests to a different service. That service, which they paid for, was keeping an accurate count. So their page is just basically refreshing from the API, like, every minute, or on every, like, real-time event. So all I had to do is look at their source code, see how they're making that request to the other service, and emulate that in my bot. And I was off to the races. And I think I live-coded it in front of my housemates. I was like, here's my idea, like, should I do it? And people were like, hey, if you want. And I was like, I would do it,

but it'd be nice if people, like... I really wanted that, you know? So I was like, okay, you know, I'll do it. And I think I completed it in, like, an hour, deployed it with Laravel Forge. It was all set up and going. And I remember everyone that was looking at it at the time was like, wow, like, you're, like, a super nerd. Like, you really love this. I was like, yeah, I mean, this is so much fun. Like, I could do this all day. I mean, if it paid the bills, that'd be great, but usually it's some side project kind of thing that I'm hacking on.

Bennett Bernard (06:28)
Mm-hmm,

Yeah. Yeah, that's cool. I'm surprised that you're able to hit the API that they're paying for. I guess I'm just surprised, is that normal? Like, is that typically what you see in those kinds of services? Or is that a bit, like, you were surprised that you were able to hit it?

Bradley Bernard (07:06)
I was a little bit surprised, but if you can see it on the website and they're making requests to a different API to get it, there's a high chance that you can just emulate that. If I had loaded the page and that count was already on there, I wouldn't know where it came from, and so I'd just have to reload the page and pull out that number. But since it was pretty transparent how it was done, it was pretty easy. But yeah, of all the bots I've made, some are super difficult, some require a lot of code, some are very, very straightforward.

It really depends on the use case. And I think the more fun ones are a little technically challenging, so, like, you learn something along the way, but you also get that satisfaction of, oh, it works. Like, you know, this is something that I could repeat and do again, and code in Python, in Swift, like, literally whatever it is, PHP, whatever you need to get the job done. Like, that's super fun.

Bennett Bernard (07:54)
Yeah, that's cool. One of the bots that you've built was, and I'll kind of start it off, but then you can take it from there: at the height of the pandemic, everyone was getting a Peloton, and there was like a waiting list, or there was some kind of, like, just backlog, and it was really difficult to get one. But you had come up with a solution to get to the front of the line, or it was to get, like, straight to customer service, where you could, like, talk to someone to get the bike.

Remind me exactly, because I remember using it and it worked. All the technical aspects were completely beyond my understanding. But remind me, I guess, what that was, and how you pulled that off, and what was that bot doing exactly.

Bradley Bernard (08:31)
Yeah. So I had ordered a Peloton probably, like, in the 2021 era, I think maybe in June. I looked online, like, how can I get a faster delivery date? Because people on Reddit said, hey, if you reach out to customer support, some people cancel their delivery, and you can kind of swoop in and get that bike delivery. And so I had first started chatting with their support, probably, like, once a day, saying, hey, here's my order number, here's my email, are there any openings today? And the usual response is no, but, like, hey, check back in later.

And so I thought, you know, along the lines of making a bot, like, it'd be fun to chat the customer support regularly. Cause I imagine for these chat support tools, at least for some of mine, it shows you the chat history. So if you chat back again and again, they'll see it. But I was like, I wonder if they have that history too, and if they don't, it'd be great to just be, like, an automatic chatter. So I could open their live chat, log in with my email,

send them a message saying, hey, I'm open for delivery in the next, like, two to three days, do you have anything, and see what they say. And so it started off simply as that. What I did is I used a Python tool to automate controlling the browser. So it wasn't anything that I did, like, under the hood. I wasn't mimicking any API requests. This was simply, like, boot up a browser, click on their, like, chat icon, and from there, like, fill out all the inputs as if you're typing in text manually. And then I built out

essentially a series of questions and responses that people would ask me. So as I'd been chatting them manually, I kind of built up a repository of questions: if I said, hey, do you have a delivery date in the next few days, they would say, no, but we have one then, or, no, but check back later. And so all the responses that I got, I collected,

and basically created logic to walk through, like, a correct response each time. And so this is before the age of AI, so you can imagine a giant if-else list of, like, if the agent had these keywords in their response, respond with this. And it was super fun. I think I deployed it after, like, a week or two, and I was just glued to my machine watching it auto-chat. Every time it would chat people, it'd print out the log. So there'd be a giant log file of, I said this, the agent said that.

And after each session, I'd take a look at it, update the code, because maybe an agent said something different, and then move forward. So it continuously got better. And I was sitting there watching it. I think at the height of it working, I was chatting every hour, which is absolutely absurd. And the conversation would maybe last 10 minutes. I think sometimes I even had overlapping conversations, where I'd have one chat, and, because it takes so long, they kind of ping-pong back and forth, I'd have another chat. And I was sitting there like, is this going to work?

And so the TLDR is that after running it for probably about a week, it did speed up my delivery. I think I was quoted a few months, and I ultimately got it maybe in, like, two weeks. So I'd run the bot every hour to chat, look at the logs, see, like, the result, and my delivery date was, like, continuously getting moved up because these slots opened. Then it got to a point where it was like, hey, it's next week. And, you know, that's good enough for me. So I canceled the bot. But then I had told people, I was like, hey, I built this cool thing, it automatically chatted,

had enough logic to figure out what the agent was saying and respond appropriately. Because, you know, if they ask you what dates work for you and you're answering them with,

you know, something completely garbage, they're going to be like, we don't understand that. Like, what are you saying? So it took a bit of time to figure out the right logic, but yeah, it was really fun. Another, like, you know, scratch-my-own-itch thing: I want a faster delivery, how can you make that happen? Well, you could call in, you could chat them, pretty much anything you can think of. But this way, once I'd built out all the code, it was all autopilot. I literally just turned it on, had a cron job that ran this Python script that booted up the browser, that chatted them, and then had the logs

so I could take a look at it afterwards. But yeah, that was fun. I think I had a nice, nice challenge with that one, because it wasn't my first time automating the browser, but it's always fun to do that. And I think things change over time too, so I'd, like, update what elements I was clicking on as the chat changed. But yeah, super, super, super fun.
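The "giant if-else list" he describes could look roughly like this in Python. The keywords, canned replies, and order number are invented examples, not the actual rules his bot used; the real bot scraped each agent message out of the chat widget with browser automation and typed the chosen reply back in.

```python
# A sketch of the pre-AI keyword-dispatch reply logic described above.
# All keywords and replies here are hypothetical examples.

ORDER_NUMBER = "XX-123456"  # placeholder, not a real order

def choose_reply(agent_message: str) -> str:
    """Pick a scripted response based on keywords in the agent's message."""
    msg = agent_message.lower()
    if "order number" in msg or "email" in msg:
        return f"Sure! My order number is {ORDER_NUMBER}."
    if "no availability" in msg or "check back" in msg:
        return "Okay, thanks for checking. I'll try again later."
    if "slot" in msg or "opening" in msg:
        return "Yes! I'm available any day in the next two to three days."
    if "anything else" in msg:
        return "No, that's all. Thank you!"
    # Fallback: restate the original request rather than confuse the agent.
    return "Hi! Is there any earlier delivery opening for my Peloton order?"
```

Each chat session's log then shows which branch fired, so unexpected agent replies can be turned into new branches, which is the "continuously got better" loop he mentions.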

Bennett Bernard (12:30)
Mm-hmm. Yeah, and useful too. I think you brought up a good point about wanting to automate things that are boring, or that you'd otherwise have to sit there and manually do. And certainly accounting, and I think a lot of other jobs, you know, not just accounting, but people have things like that that they do, and they say internally, like, this is so mundane and monotonous and routine. Like, you know, if only I had a robot to do it. And I think prior, you know,

non-technical people would have a lot of trouble implementing that, prior to some of the advances that there are now with these AI models. Still, I think it does get to, I think, naturally, humans are, you could say they're lazy, because they want as much reward as possible for doing the least amount of work, generally speaking. And I think that's okay.

Also, too, you know, people want to do things that give them value, and I think people will recognize when there's a really mundane task, like, say, the Peloton thing, you know, talking to customer service. Like, that's not a task anyone wakes up and says, I want to talk to customer service today. So, like, automating that on your behalf is just a cool thing to do. And like I said, there's so many things within accounting that would be useful too. I'm not quite sure, but I'm sure there's a lot of

different software providers and vendors that give accounting firms the ability to have bots, almost, or automate certain transaction matching. If you're doing bank feeds and you have to match transactions together, I'm sure there's software now that can look at that and guesstimate what matching to do based on historical data that you provide.

All in all, it kind of goes back to that thing where people want to be doing things that are meaningful, and not things that are manual and not meaningful. So yeah, bots are cool, and it has a little bit of a hacker-ish feeling to it, you know?

Bradley Bernard (14:27)
Yeah, yeah. One that I created recently was for March Madness, I think, but it was like a Chipotle bot, is what I called it. And essentially, during the NBA games, I think if anyone made a perfect free throw, which was like a one-for-one, two-for-two, or three-for-three, the Chipotle Twitter account would tweet

an image, and inside the image would be a code. Kind of think of, like, a captcha or something, where the words, or the letters in the code, are kind of malformed and jumbled, just so it's a little bit harder

for people to get the code. And so you had to immediately take that code and text it to a short telephone number, like an 888-222 or something. Once you sent that code, I think the first 150 or 500 people or something would get a free Chipotle entree. And so I went to Portland, and my friend was watching the game, and he had pulled up his phone, his laptop, and the TV screen. I said, hey man, what are you doing? Like, why do you have so many devices out? And he's like, well, I'm just watching,

because I want to get free Chipotle. And right then I was like, okay, now we're talking. There was a little bit of downtime, like, we were visiting them, but I was like, okay, I'm always up for the challenge. Like, what do we got here? So I looked at how he was doing it, and I think he had, like, the Twitter feed on his desktop, was watching the game on the TV, and then he had his phone.

Like, ready to text. And I think he also had the NBA live feed of the game. So the TV has a bit of a delay, but the live feed, if you go online, is much more up-to-date. And so I was, like, immediately diving in, and I was like, how quick can I make a bot? Because sometimes I get a little overconfident: maybe 30 minutes, an hour, or whatever. But this one gave me a run for my money, because there were a few different steps. You had to first detect the perfect free throw, then you had to wait for Twitter, for them to post the image.

Then once that image was out, you had to send it to AI and say, extract the code that starts with FREE from this image, take that code, and then pipe it through a text message. So there's a series of steps. I think the first challenge was getting the timing right for detecting the event, because on the NBA live feed, a perfect free throw would happen, and this live feed was powered through a JSON API. So I was able to create a Mac app where you could click start,

type in the game ID that was on NBA.com, and it would consistently pull in this feed. It would have to detect when a perfect free throw was done. Once that's done, it kicks off, like, hey, we're on step two. But there's a bit of a delay. Obviously, TV happens quick, or, well, TV happens slow, the NBA feed is fast. So I had to configure a sliding delay of, do I wait five seconds, 30 seconds, 60 seconds? And while that was engaged, I'd be listening to the Twitter API.
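The detection step he describes could be sketched like this. The real NBA live feed's JSON schema isn't reproduced here, so the event shape (`type`, `made`, `attempts`, `id`) is an assumption for illustration:

```python
# Sketch of the perfect-free-throw detector. Assumes a simplified
# play-by-play event shape like {"id": 7, "type": "freethrow",
# "made": 2, "attempts": 2} -- not the actual NBA feed schema.

def is_perfect_free_throw(event: dict) -> bool:
    """True for a completed 1/1, 2/2, or 3/3 free-throw trip."""
    return (
        event.get("type") == "freethrow"
        and event.get("attempts") in (1, 2, 3)
        and event.get("made") == event.get("attempts")
    )

def new_perfect_events(events: list, seen_ids: set) -> list:
    """Scan a polled batch of events, skipping ones already handled,
    and return only the newly detected perfect free throws."""
    hits = []
    for ev in events:
        if ev["id"] in seen_ids:
            continue
        seen_ids.add(ev["id"])
        if is_perfect_free_throw(ev):
            hits.append(ev)
    return hits
```

Polling the feed every few seconds and feeding each batch through `new_perfect_events`, then applying the sliding 30-to-60-second delay before checking Twitter, matches the flow he walks through.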

And if you're not familiar, the Twitter API went through a crap ton of changes. Essentially, it's nothing like it used to be. Like, when Elon came in, he made it paid-only. Like, you can literally only do things that are useful if you pay Twitter money. So all these apps died essentially overnight. The pricing is outrageous. So what I had to do is a bit of, like, reverse engineering on Twitter's side to be able to get tweets from an account. So the end goal is to look at, like, Chipotle's handle and fetch their tweets, like, as fast as I can

after this free throw happens, and then the next new tweet that matches a certain format that's expecting the code, look at that tweet, take the image inside the tweet, pull that out, throw it to AI, get the code back, and send it off in a text. And so I had to find a bunch of Twitter tools that were online, like GitHub repositories. There's one where you could type in your Twitter account handle and get, like, an authentication token. Then you could throw that at the Twitter unofficial API and get back tweets. So I kind of crafted something together,

got that to work. Then the next step was integrating with AI. So I think I chose Google's Gemini Flash 1.5 model, I think. And so that one was multimodal, so you can send an image to it and a query, and it would do something. So I found the tweet, took the image, sent it to Google. I said, hey, Google, like, give me the code in this image. It returned me an answer, and it wasn't great all the time, so I had to perform more, like, text manipulation on top of it.

I extracted the code, and then, now what do I do with that? Like, how do I get a text message automatically fired off? Well, it turns out on Mac machines, there's a thing called AppleScript, and AppleScript is a scripting language that Apple created that you can use to essentially configure and control applications on your Mac. And so there, I was Googling around, like, how do I automatically send a text message, and I landed on this AppleScript. And the AppleScript essentially says, like, hey, wake up the iMessage app,

set the recipient equal to Chipotle, then set the text message body equal to this, and then click send. And so I had essentially configured this whole pipeline: wait for the event, wait, go fetch their Twitter, find the latest tweet, take it to Google. After that's done, send it off to Chipotle, then mark it as, like, a sent text message. And when I first started building it, it was a headache getting the timing right. I'd run into Twitter API limits. I'd be fetching too often, and Twitter would say, hey, hold up, like, you're fetching too often.
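The send step he describes can be driven from Python by building the AppleScript as a string and handing it to `osascript`. The Messages snippet below follows a commonly shared pattern and may need tweaks across macOS versions; the short-code number is a placeholder:

```python
import subprocess

CHIPOTLE_NUMBER = "888222"  # hypothetical short code, not the real one

def build_applescript(recipient: str, body: str) -> str:
    """AppleScript roughly like the one described: target Messages, send a text."""
    return f'''
    tell application "Messages"
        set targetService to 1st account whose service type = iMessage
        set targetBuddy to participant "{recipient}" of targetService
        send "{body}" to targetBuddy
    end tell
    '''

def send_imessage(recipient: str, body: str) -> None:
    """macOS only: hand the script to osascript.
    Needs Automation / Full Disk Access permissions granted to the caller."""
    subprocess.run(
        ["osascript", "-e", build_applescript(recipient, body)],
        check=True,
    )
```

The Python side only assembles and dispatches the script; Messages itself does the sending, which is why no brittle window-clicking is needed.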

So getting that timing right, of like 30 to 40 seconds after it happened, was key. And also fine-tuning my AI query, because half the time Google would return something wrong or something too long. And I think the code is exactly, like, 15 characters. So I'd take the response, make sure it's 15, and then send it off. And it was so much work. I think I spent maybe, like, 10 hours building this app, when I probably could have sat there and done what my friend did, like, waited for it to happen, sent off the code. But it was like, it was like a challenge. Like, how far can I get? What

Bennett Bernard (20:01)
Heh.

Bradley Bernard (20:09)
roadblocks am I going to run into? What am I going to learn along the way? And, like, I learned AppleScript, learned about Twitter's, like, not-official API, but, like, kind of the API that runs twitter.com under the hood. So taking a look at how they actually perform loading tweets, doing all that, and then getting better at AI. Like, AI is everywhere now. It's great to be able to perform, like, an AI image-processing kind of task. And then, yeah, just, like, if you can coordinate all that together. It might sound simple, but the timing had to be as fast as possible. Like, everything had to be

queued up and be perfect. And so I built that. It failed, like, a ton of times, I adjusted it, and then finally, near the end, like, I think it was near the end of the series. It wasn't March Madness, it was the Finals, I think. Near the end of the series, I had gotten it to perfection. So literally, the game started, I'd click start, I'd sit back, and I'd say, okay, I am confident this is gonna work. And the second the first, you know, perfect free throw came in, boom, like, I'd watch my app.

And so it had, like, three pipelines in the Mac app. The first one would be, like, hey, a highlighted event, that perfect free throw, boom. Then you'd see this, like, loading spinner on fetching tweets. And then you'd see the tweet pop up, and then I'd take that and pass it to AI. Then you'd see another loading spinner, like, you know, processing AI. And then, boom, the third column was a sent text message to Chipotle with the code. And then Chipotle would text you back: they'd say, like, off the rim, I think, if you were too late or you had the wrong code, or they'd text you back a code,

which you could redeem online to get the entree. And so, yeah, first time, got it, and it worked. And I was sitting there like, this is so cool. Then after that, I just kept it running, cause I was like, yeah, I spent so much time, I don't want to just, like, pack my bags and go. I think I got maybe four or five codes after that, where the free throws were made, I watched the whole pipeline in its full beauty, and I got, like, three or four more codes. But if you get it right a second time, they send you back the same code you already got. So, like, I didn't get any more.
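The "make sure it's 15 characters" cleanup step he mentions, applied to a noisy model reply, could be sketched like this; the exact code format is assumed from his description, and the sample strings are made up:

```python
import re
from typing import Optional

def extract_code(ai_answer: str, length: int = 15) -> Optional[str]:
    """Pull the promo code out of a noisy model reply.

    Assumes the code is a single run of exactly `length` letters/digits,
    matching the '15 characters exactly' check described above.
    """
    # Split off chatter like "The code is: ..." and surrounding punctuation,
    # keeping only alphanumeric runs as candidate tokens.
    candidates = re.findall(r"[A-Za-z0-9]+", ai_answer)
    for token in candidates:
        if len(token) == length:
            return token.upper()
    return None  # model reply didn't contain a plausible code
```

Gating the text-message step on `extract_code` returning something (rather than sending whatever the model said) is what keeps a wrong or over-long model answer from burning the one shot at the short code.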

Bennett Bernard (21:48)
Mm-hmm.

Bradley Bernard (22:03)
But it was just a validation in my head of, like, you know, I didn't get lucky. Like, it wasn't a one-off. Like, I was scoring big time on that whole pipeline. And if you looked at Chipotle's tweets, you'd see all these replies that are like, you know, screw bots, screw bots, like, I can't get this fast enough. I'm sitting there like, yeah, I have a bot, but I'm not, like, doing anything I would say too malicious. Like, I spent a lot of time and effort, you know, on it. So it was more of a technical challenge. But I'm sitting there looking at people's responses, I'm like, yeah, there's probably a few others that are

Bennett Bernard (22:23)
Yeah.

Bradley Bernard (22:31)
spending time botting it, you know, like me. But yeah, that's kind of the story of the Chipotle one. And also a lot of fun, a lot of work too.

Bennett Bernard (22:39)
Yeah. Yeah, that's cool. You know, it's funny, because it, like, reminds me almost of, like, quantitative trading, or, like, you know, the people in the stock market who, like, write these programs that, like, auto-trade. I'm certainly not an expert in any of that, but, like, you know, I know of it. And, you know, is it, like, technically, like, by the rules? It is. But, like, a lot of people object to it, because, like, it can happen in a millisecond, they're booking and making all these trades.

But yeah, that's pretty cool that you got some free food out of the Chipotle bot, because no food is as good as free food, if you ask me. Free food is the best-tasting food around, so that's cool.

Bradley Bernard (23:14)
Yeah, absolutely. And I think I packaged my app up and sent it to my friend. The only thing it needed, it was, like, Full Disk Access, which is a bit special on Mac, because I had to run this AppleScript, and this AppleScript touched your iMessages, and, like, essentially, it just needed to be, like, a little bit loose on security. But, you know, it's a friend sending an app. Like, I'm not trying to hack his computer. I was like, hey, if you want it, here it is. Like, I'm proud of it. And so, yeah, it was cool to be able to create something that was, like, reusable,

integrated with your own messages, and worked pretty well.

Bennett Bernard (23:48)
That AppleScript is pretty interesting. I'd never heard of that until you mentioned it, that you can basically code your own computer to do things that are, like, native to it. That's pretty cool. Cause I know that Microsoft just recently, I found this out and I didn't see any news release, I just saw it in my Excel, where you can, like, write TypeScript in Excel. And I was like, that's really random. And I was trying to think about

Bradley Bernard (23:55)
Mm-hmm.

Bennett Bernard (24:14)
potential use cases, and they have some examples of, like, you know, this is TypeScript code that does this, but it wasn't overly impressive. And in my head, I was thinking, like, if you knew TypeScript, which is, like, a general programming language, right? Like, that's, like, a flavor of, yeah, it's, like, a flavor of JavaScript, I want to say. That is right. Yeah. So, you know, in my head, though, I was thinking, like, if you have the knowledge to write

Bradley Bernard (24:29)
Mm-hmm, very popular.

Yeah, JavaScript with types, so like the better version.

Bennett Bernard (24:43)
TypeScript, like, functionally, why would you feel the need to do a lot of things in Excel in the first place? That's kind of where I was like, I'm not sure where that benefit would be. Just because, in my opinion, at least, and this is for accounting, maybe there's other use cases, but, you know, for accounting, I think a lot of the push should be trying to get out of Excel as much as possible. And so, like, why would you learn TypeScript just to use it in Excel, when you could be learning literally anything else that probably gives you

more benefit, just more broadly, than, you know, learning TypeScript just for Excel? But it's interesting that AppleScript has that.

Bradley Bernard (25:15)
Yeah. Yeah. Maybe the automation factor comes into play, of, you know, some software engineer comes in and is trying to fulfill a request for automating some Excel work, and

TypeScript, I think, is one of the most popular languages right now. JavaScript alone is extremely popular, but TypeScript is definitely succeeding in being the thing to know across, like, backend, mobile, web, front end, et cetera. So, yeah, maybe they have gone on a mission to say, like, we're going to target the most popular language, we're going to enable some sort of automation layer, so that if people do want that, like, moving things across files or working with external data systems or whatnot, they have some sort of interface to do that.

But yeah, I mean, I think moving things off Excel sounds great, but maybe just working within the confines, and maybe that's been a tall ask for, like, the Excel team, of, like, we want more, how can we do that? And, like, they want to sell Excel, so, you know, they're not really going to say, hey, get off Excel. But here's, like, a language and a tool that you could do that with. But yeah, I haven't looked into it. And AppleScript is cool because apps can provide

kind of a way in. Like I mentioned, for the Messages app on Mac, there's a way to send a message, and, like, for the Mail app, you can send an email. And I think there's a tool, when you open up, like, AppleScript, to kind of target an application and, like, see what, you know, levers you can pull to enable functionality within the app that Apple's fully developed. But it's a little bit of, like, a secret language. It's, like, automating

Mac apps, where, if I didn't have this, you can imagine me trying to, like, open up the Messages window, like, click on new message, like, type in Chipotle in the kind of address bar. But all of that is kind of brittle and requires much more effort. And so with AppleScript, I can copy in, like, 15 lines of code to say,

activate the Messages app, set the recipient, set the body, click send, and it's all built in, because the Messages app exposes that. So I don't think I'll do too much with AppleScript in the future, but that use case lined up absolutely perfectly. And it is a cool language and feature of the Apple platform to know.

Bennett Bernard (27:24)
Yeah. Yeah, that's cool. I remember one bot idea I had a while ago. I think it was during the 2016 election, so, you know, that was Trump versus Clinton. And it wasn't because I was, you know, following the politics much, but it was because, like, that's when I was really learning a bunch of programming. So, you know, that was when I was at loanDepot, you know, really learning how to use Python and SQL. And, you know, back then, that was when the Twitter API was, you know, how it was

when you were describing it before, pre all the new changes. And I had an idea, and this was, you know, I still probably would not be able to make a bot in the current state of my skill set. But, you know, I remember learning, and I was thinking about, as an idea, basically looking at, like, trending topics, like, in politics on Twitter, and, like, trying to find a way of, like, trying to get, like,

a phrase or, like, a viral event that was happening, and, like, basically, like, print it on, like, a t-shirt, like, all automatically. Like, say there was that really infamous Trump tweet where it was, like, covfefe or whatever, like, he misspelled something. It was C-O-V-F-E-F-E. And, you know, like, the idea would be that you just, like, take something like that, like, put it on a shirt, and, yeah, well, like, just, like, have it be not an actual printed shirt, with no inventory,

Bradley Bernard (28:42)
straight to the press.

Bennett Bernard (28:48)
but just have it be posted. And then if people start buying it, it would be like just-in-time inventory, where it would then screen print and send it over to them. Now, again, I probably didn't have the technical chops to pull that off, but it was an interesting idea. But then the other part of me is like, I don't want to foment political discourse by having a bunch of shirts that print "lock her up" and then another bunch of shirts that say "lock him up." I don't want to be involved in that business at all. It's just a moral issue, I guess. But yeah, it was interesting

Bradley Bernard (28:53)
Mm -hmm. Mm -hmm.

Yeah, yeah.

Bennett Bernard (29:17)
playing around with the Twitter API at that time too, because that was when I think it was much easier. And I don't know if it's easy now, but it was much more open than it seems to be these days.

Bradley Bernard (29:27)
Yeah. And I think there's another category of bots that are absolutely businesses too. Like, as an indie developer over the past 12 months, releasing an app on the iOS App Store, I care about where Chatty Butler ranks on the App Store. So I paid for a service called Appfigures. And what they do is they're scraping the App Store for certain keywords. So for Chatty Butler, I'd want to rank for "AI agent" or "AI chat."

So I'd go to Appfigures, the service, and say, here's my app, here's the keywords. Can you please track those, like every hour, every day, both for the US, for Germany, for iPhone and iPad? And so when I pay for those services, for me as an engineer, especially as a bot creator, I'm thinking,

Man, I would love to have a business like that where my core logic is spinning up servers that are just consistently scraping the API. So that involves doing a bit of reverse engineering on Apple's devices and hardware, automating that, and then doing that at scale. And so there's tons of companies that are literally scraping data and repackaging it, whether that's cleaning with AI,

augmenting with other data sources, there's so many possibilities, and then selling it for a subscription. So I had used that probably for a few months, but decided to cancel because I wasn't very on top of app store optimization. But there's a company called, I think, Bright Data, and they have a full suite of scraping tools and data sets. So you can pre-purchase, like, Instagram accounts that have over a thousand followers and yada, yada, yada. And so it's

pretty interesting to be in a space where there's a lot of business opportunities. So the bots that I created, yeah, scratch your own itch, kind of fun and throwaway bots. But if you push it a step further and crack the secret sauce on reverse engineering some of these things, some can be really hard. Like Apple's definitely a little bit on the more difficult side.

Some can be a bit easier. It's like taking your software expertise and your kind of security and pen-testing skills and pushing that to the limits, putting that all together. But if you can crack that, I think you could really have an awesome business of just automating that and then selling it. Like, it's on my to-do list to have a B2B app that just has some massive scraping engine for some service that people value, and then just sell it: create a few charts, create a few dashboards.

It doesn't sound too hard to me, and I haven't gone through it yet, but every time I had an invoice from Appfigures, it was kind of that dread of, man, I should be in this business myself, because one, it'd be super fulfilling and fun to reverse engineer, and two, tons of people are paying for it. It's literally a profitable business with, I'm sure, millions in annual revenue. So it's, how do you make your bot much more profitable and useful than just these one-off things? Because there's so much out there that people will pay for.
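The rank-tracking core of a service like Appfigures can be approximated with Apple's public iTunes Search API. A minimal sketch follows; the app name and keyword are examples, and a real tracker would fetch on a schedule and handle rate limits:

```python
from urllib.parse import urlencode

# Apple's public search endpoint for App Store content.
ITUNES_SEARCH = "https://itunes.apple.com/search"

def build_search_url(keyword: str, country: str = "us", limit: int = 50) -> str:
    """URL that returns the top App Store results for a keyword."""
    query = urlencode({
        "term": keyword,
        "country": country,
        "entity": "software",  # restrict results to iOS apps
        "limit": limit,
    })
    return f"{ITUNES_SEARCH}?{query}"

def rank_of(app_name, results):
    """1-based rank of an app in a parsed 'results' list, or None if absent."""
    for i, app in enumerate(results, start=1):
        if app.get("trackName") == app_name:
            return i
    return None

# In a real scraper you'd fetch build_search_url("ai chat") every hour,
# feed the JSON "results" array into rank_of("Chatty Butler", ...),
# and store the rank per keyword, per country, over time.
```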

Bennett Bernard (32:23)
Yeah, before we move on from bots, one thing that you mentioned with all your bots that I thought was interesting is you talked about using Laravel to deploy these, and using Laravel Forge. I guess from my perspective, as you know, I'm not a Laravel person. Do you have a routine flow of, I want to make a bot, so I'm going to spin up Laravel, I'm going to configure the back end? I guess, why do you choose to go that route? It just seems like such a comprehensive framework for a bot. When I think of a bot, I think of something limited, like there's no UI, it's probably all just running on a server. I guess just talk us through why you're such a big fan. I know you're a big fan of Laravel, but why you think that for bots, like what's your normal flow and why use Laravel for that?

Bradley Bernard (33:10)
Yeah, usually a bot is just hitting an API with certain configured parameters, headers, and payload. And so you could choose a lot of different languages: Python, Swift, TypeScript, like JavaScript, whatever.

For me, it's familiarity and comfort and speed. So I know I need to mimic this API request, and the essentials of a bot are: do something and store it in a database or a log file. And Laravel provides you so much power to have a queue worker, a database, a front end without much effort. And so you could start off with the skeleton of a bot that fetches data and inserts it into a database, and you don't even have a front end. You just open up

your SQL editor and take a look at the raw data in your tables. And depending on what you need, that could be good enough. But I feel like it provides such an excellent foundation, and there's the amount of very deep tools that Laravel provides, like for auto-scaling a queue worker, or managing queue workers. So a queue worker, you can think of as a separate entity waiting to run work. And so if I had, for example, an App Store scraper,

say I had five customers on my fake App Store scraper and they wanted to check 20 different keywords. Well, in Laravel, what I could do is say, every hour we want to fetch the rankings for all these keywords. As I go through each keyword, I can kick off a queued job to this queue runner.

And there's excellent tooling that Laravel provides that can scale this for me. So if I have five customers, I have five queued jobs, but it depends on how many queue runners you have. So if you have two queue runners, it can only run two things at one time. But Laravel has excellent tooling that says we can scale these queue runners up and down dynamically at runtime, depending on the queue load and other factors. And so pulling in all these packages, which I've used before, which make a lot of sense and aren't a headache, makes Laravel an

excellent choice to at least start out. I think as you develop more complex bots, you have to make sure you use the right technology. For the Chipotle one, you could have a Laravel version that, again, scrapes the NBA feed, scrapes the Twitter API, but you need to send that text message, and there's zero chance that a server online can send that text message for you. That has to be tied to your device. So a different approach I could have taken there was to do all the processing on the backend, then send a push notification to my device, my

Mac, and then that would trigger that AppleScript to send the text. And that would be great because I could interface with multiple computers. I could send it to 50 people, and that processing is only done once on the server; then the text messages get sent to 50 people, and that would be doing it at scale. But essentially I did it all on the computer for the Chipotle bot. For a lot of the other ones, it's collecting and harvesting data using an API request. PHP works. I wouldn't say it's the best language or the fastest, but the tooling and the infrastructure

Laravel provides, both free and paid, because it's not all free, works well with my kind of thought process and mental model. I think PHP is very approachable for building a web app, building a scraper. So yeah, hopefully that answers the question, but I'm a bit of a long-term PHP fan, to say the least. I think knowing how to build something really quickly, and understanding what pieces you need to figure out and get out the door quickly, that's how I see a successful bot. I don't want to be stumbling through a new framework.

I know exactly what I need to do, which is how do I do it the fastest and how do I make it scalable?
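The queue fan-out Bradley describes isn't Laravel-specific. A minimal sketch in Python, with invented keywords and a stand-in ranking check, might look like this (in Laravel, this would be dispatching queued jobs to worker processes managed by something like Horizon):

```python
import queue
import threading

# Shared job queue and results store, protected by a lock.
jobs: queue.Queue = queue.Queue()
results = {}
lock = threading.Lock()

def check_ranking(keyword: str) -> int:
    """Stand-in for the real scrape; pretend the rank is the keyword length."""
    return len(keyword)

def worker() -> None:
    """Drain jobs until a None sentinel arrives."""
    while True:
        keyword = jobs.get()
        if keyword is None:  # sentinel: no more work for this worker
            jobs.task_done()
            break
        rank = check_ranking(keyword)
        with lock:
            results[keyword] = rank
        jobs.task_done()

def run_hourly_batch(keywords, n_workers=2):
    """Fan out one job per keyword across a fixed pool of workers."""
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for kw in keywords:   # fan out: one queued job per keyword
        jobs.put(kw)
    for _ in threads:     # one sentinel per worker to shut the pool down
        jobs.put(None)
    for t in threads:
        t.join()
    return dict(results)
```

The point of the pattern is that the producer doesn't care how many workers exist; scaling up is just starting more worker threads (or, in Laravel's case, more queue runner processes).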

Bennett Bernard (36:40)
Yeah, that makes sense. And just to be clear, we're not sponsored by Laravel at all, but Brad's working on that as we speak. So yeah, just kidding. Cool. One other thing, sorry, I keep saying one more thing, but you mentioned another thing that was, I think, important. And it kind of got me thinking about, again, as an accounting person, the state of the future for my industry. And I feel like bots

Bradley Bernard (36:45)
Not yet. Yeah. I'll talk to Taylor.

Bennett Bernard (37:05)
had that bad rap, you know, they kind of felt hackerish. You think about the election bots and how they were fomenting discourse on Twitter and stuff like that. But I do feel like they're becoming more approachable, which I think AI and all the advances there are hopefully making them. I feel like it's gonna be that second wave of, like, offshoring tasks, right? Because at first, prior to offshoring, all the work was done within your team locally, for the most part.

And then as technology advanced and you were able to work online with the internet, companies started offshoring to the Philippines and to Mexico and to India, the places that were a bit more cost-friendly, to maintain profits and things like that. I do feel like bots might be that second or third coming of that movement to reduce costs, and be more broadly accepted. So I'll be curious to see, you know,

how that progresses in my own industry. Because I think a couple of episodes ago we talked about an article from someone in the accounting profession, Jason Staats. And it was about, like, the first $100 million or first $1 million one-person business, something like that. And I think you'd have to rely on bots to still do all of that kind of mundane, routine, monotonous work. But, you know, that might not be so far off these days, again, with just how fast things are progressing.

Be curious to watch that and see where that goes. But as you were talking, it made me think about the comparison of bots with offshoring, back when that was really popping off years ago.

Bradley Bernard (38:38)
Yeah. It's a little bit like the shift we've been seeing with the new AI world, where it used to be software as a service. So you can think of a pre-configured website where you could do, you know, four or five tasks, and they're very, very well defined. Now, with AI having more autonomous agent capabilities, I think it's coined as "service as software," kind of flipping the narrative a little bit, where

this AI tool is doing a whole lot more in your process and is able to function almost like a human. When you're using software, you're the human driving the software; now, in the AI world, this AI agent is involved in your process and is doing all these tasks for you. And so there's a few companies out there that do that well. I think one's called Gumloop, essentially an AI workflow builder. So you can imagine a drag-and-drop

interface, a little bit complicated, but you can connect this data source to that one, create these automations that your company can run, and it's using your own data. So you connect your Google account, these other accounts, and from that you can create pretty complex AI automations or AI agents. So that's kind of the future, I think. With that, Meta just released Llama 3.1, which was a huge, huge deal. This is

a model that's been a long time coming. So Meta first released Llama 3, I don't know, it feels like maybe a few months ago, but the AI landscape moves so fast it's hard to even remember with all these companies one-upping each other. But they released

Llama 3, I think it was 8B and 70B, and you can think of the 8B and 70B as the intelligence level. With higher intelligence, it's slower and more expensive. An 8B is not as smart, faster to run,

but, you know, good enough. Then 70B is slower to run, but smarter. And so this time around, they actually came out with their esteemed, I think it was 405B, so 405 billion parameters. This one's very expensive to run, but extremely smart. So it's expensive, it's slow, but it's smart. And with this release, I think Meta went full open source: described exactly how they trained it,

described exactly what it should be used for, how people can replicate it. And once these open models are out there, there's all these genius folks that take a look at it, fine-tune it to their use case, improve on what Meta's done. There are so many different methods and secrets in the AI model space that, as Meta puts out such a high-class model, we can only wait and see over the next weeks and months how much better that one's gonna get.

So it's pretty crazy. I think Zuckerberg's approach is: let's make something powerful and free and see what happens. And the community kind of went nuts, you know, overnight, of, wow, this is out, I can't wait to build with it. And just thinking about Meta releasing this, it's clearly upping the ante. I bet OpenAI is kind of shivering right now and being like, okay, when's our model coming out? Because it's literally neck and neck: Google, Meta, Anthropic. It's never-ending. I think the cost of intelligence is racing towards zero.

At that point, I'm not really sure how the world's gonna react, because I'm not really ready for it. I don't think a lot of people really grasp having an extremely smart, capable person, bot, agent, whatever you wanna call it, at your fingertips 24/7. You could have thousands of them, too. You could have a company with a thousand people that are all AI bots. It's super odd to think about.

Bennett Bernard (42:14)
Yeah. Yeah. It feels like The Matrix a little bit, you know, that whole movie plot, like we're going down that road. But one thing that is interesting, yeah, is the open source approach. Just from my own humble opinion, I think that's the one that'll be here to last a little bit longer. And I think we've talked before about how to know when to choose different models, and the data security that comes with those. Like, OpenAI isn't really open. It's kind of a weird

Bradley Bernard (42:18)
Yeah. Yeah.

Bennett Bernard (42:44)
oxymoron, you know, because I don't think they're very open in terms of how the whole process works. And it's interesting to see with Llama. I do think that's the future, where businesses will want to use these models more. And from my own understanding, using Llama, using an open model, gives them a bit more transparency and control over the inputs and how it got to those kinds of results or answers.

But I did want to take one, I guess, slight right turn in this conversation, because one of the things that I've been interested in personally is: okay, Llama's here, it's great, but how does someone actually run it? Give me some practical steps, like what's involved in running Llama. Because you said it was really expensive, but then it's open source. So I guess, to the layman, explain that piece: how is it expensive, and how would someone go about setting that up on their own local machine?

Bradley Bernard (43:42)
Yeah, so there's a few different flavors. There's these companies that are building out GPU infrastructure in the cloud. So you can think of Groq, which I talked about last time, but essentially they're taking these open source models, putting them on their fleet of GPUs from Nvidia or other advanced chips,

and then running inference on them. So there's two steps in the AI pipeline. One is training: taking all these GPUs, having them go through all this training, and getting an output of weights. And weights are kind of like the bread and butter of the AI model.

Once that's done and Meta releases the weights, these companies take those weights, take all the code, and then put it on their GPUs. And now the GPU is ready for inference. And inference is the traditional sense that we talk about, where we ask a question and we get a response. It uses all that training under the hood to then power those responses. And so, again, there's a few different takes. As a developer, you can go to any of what feels like five, 10, 15, 20 platforms these days that

have their GPUs, have infrastructure to scale, and have an API that you can hit that will access Llama 3, Llama 3.1, Llama 3.1 405B, all that, as an

engineer or someone who wants to tinker with it. A whole different approach, if you want it on your local machine, on your Mac: I think there's a tool called Ollama, like maybe it stands for open source Llama, I'm not really sure. I haven't used it, but I've seen a lot of it on Twitter. I believe the gist of it is that you download this Mac application, and it provides you an easy way to download the model in a way where, even if you're not an engineer,

it's kind of hidden behind the scenes. You click download, it appears on your machine, and then you're presented with a familiar chat interface so that you can test out the model. And again, as we talk about the different sizes and classes of the Llama 3.1 model: there's the 8B, so small, fast, not as smart. There's the 70B there. And then there's the big one, 405B. And so if you take that 405B one and try to run it on a MacBook Pro with, you know, 96 gigs of RAM, it's still going to chug through

the response. What you'll see, if you ask a question, they call it tokens per second, and you can imagine that as how many characters are being printed out as it answers. And so if you ran 405B, you'd get high intelligence, but it would take a long time, maybe a minute or two, to generate a simple response. So that's why all these companies are deploying their GPU clusters. Then there's a whole other kind of advancement that,

again, I'm not super into AI and ML, I'm more of a product builder, but I'll give you the layman's kind of description: they're trying to take these models and downsize them, but keep the intelligence level. So we'll take something like

Llama 8B and quantize it, which I believe means take, you know, 90% of the intelligence and make it smaller, and making it smaller means it runs faster and cheaper on computers. So you get the benefit of it, but it does lose a bit of the precision. So it's: how far can you take that such that it's not a crappy model anymore? Like, how much can we downscale it so that it's able to run on consumer hardware, not giant GPU clusters?

So yeah, it's a little bit of a process, and it depends on what model size you choose. But there is a decently large community and tooling around Ollama if you want to check it out: you can download it, get your model set up, choose a small one because you don't want to wait forever, and then just kind of tinker with it and see which ones you like.
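The tradeoff here comes down to simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A quick sketch, counting weights only and ignoring activation and KV-cache overhead:

```python
# Back-of-the-envelope math for why model size and quantization matter.
# Rule of thumb: memory ~ parameter count x bytes per parameter.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight memory in decimal gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# Llama 3.1 sizes at 16-bit precision vs. 4-bit quantization:
for size in (8, 70, 405):
    fp16 = weight_memory_gb(size, 16)
    q4 = weight_memory_gb(size, 4)
    print(f"{size}B: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")

# 8B: ~16 GB at fp16, ~4 GB at 4-bit  -> fits on a laptop once quantized
# 405B: ~810 GB at fp16               -> why it needs a GPU cluster
```

So quantization to 4-bit cuts the weight footprint roughly 4x versus fp16, which is exactly the "run it on consumer hardware" unlock, at the cost of some precision.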

Bennett Bernard (47:12)
That quantizing, that's basically like file compression, in different terminology?

Bradley Bernard (47:17)
I think so. I know there's a whole lot that goes into it that's not really my wheelhouse, but yeah, you can imagine compressing and making things smaller for the sake of trying to keep performance, but mostly to make it easier to run on consumer-grade hardware. So you don't need to spend $5,000 for a giant Nvidia chip; you can run it on, you know, maybe a $500 CPU with 32 gigs of RAM.

Bennett Bernard (47:21)
Yeah,

Yeah. That compression reminds me, I think there's a company called Pied Piper that was doing that or something like that. The compression. Yeah. Something, something happened. Yeah. Well, it's interesting with Llama. And one of the things that I think is important for companies looking to build on platforms is there's not as much vendor lock-in. There's an article that I'll put in the show notes, one that I wanted to share.

Bradley Bernard (47:49)
I think they went under.

Bennett Bernard (48:08)
It's titled "Why Meta's Llama 3.1 is a boon for enterprises and a bane for other LLM vendors." It's from InfoWorld, and basically the article talks about the release that we talked about, the 3.1 model. But one of the things that it addressed, which I was like, yeah, absolutely, makes perfect sense, is the ability to have something where you're not paying for a service or a subscription where they can just shut off the keys at any time. You can set it up and run it on your local machines. Or, like you said, you can do all the

Bradley Bernard (48:32)
Mm -hmm. Yeah.

Bennett Bernard (48:38)
GPU cluster magic that they do. But that ability to build on something that feels more durable and within your control, I think, is huge. And again, especially from my lens in dealing with financial data. I think we talked a couple of episodes ago, financial data is probably second only to health data in terms of what kind of data is confidential. And so being able to feel like that is in a controlled environment that's not subject to, you know,

OpenAI or Anthropic or whoever else just turning it off or getting acquired. I think that is something that's going to be interesting to see too. If I was to start building an AI product now in the accounting world, I would assume that going Llama is the best approach based on that alone. But it's interesting to see that get called out in the article too, because that does feel like a very important thing for companies to be considering, you know:

What does this mean long term? Can I feel comfortable building on this?

Bradley Bernard (49:36)
Yeah, usually you choose the best model. And again, at the frequency of releases that these models are coming out, both closed source, which you can think of as OpenAI, oddly enough, and open source on Meta's side, you know, every month things change. And at the end of the day, it's pricing and availability. So Meta comes out with their new model, it's open source, available for commercial use, and then boom, overnight, the expectations and standards are changed. And I do feel bad a little bit

for all the people that are doing the closed source, because some of these companies aren't as good as OpenAI or other ones. I'm sure they're spending, you know, 500 million or 200 million, I'm not really sure, but spending a lot of money trying to train a new model. If Meta just releases one overnight, without much, you know, discussion, that's already better than what you're aiming at, then all that money that you spent is kind of worthless. So I think as the intelligence level rises in these general models, as open source gets better,

I imagine that people with not as much money are going to back out and say, we can't compete with Meta. There's no way: they have a giant GPU cluster, they have tons and tons of data, tons of engineers, tons of ML research scientists. It's not going to be in our best interest. And so I think we'll converge to the open source models and the larger companies, and then the closed source companies who can still compete. There's lots of talented folks at OpenAI. They started the craze, they were leading it, or, you could argue,

they are. It's definitely neck and neck between Google, Anthropic, Meta, and OpenAI. But it's something that, I think when AI first started, people were like, I want to get into creating my own models. And personally, for me, I was never interested, because it takes so much research and a whole different expertise. For me it was, how can I build cool applications on it? And now, being a year later, I think it's even more important to continue to be on the side of building products with it, because that race to

extreme high intelligence with extreme low cost, it's not exciting, and it's something that these companies are going to do anyway. So trying to beat that, I think, is a fool's game. I think someone from Meta who's leading a bit of the AI stuff said something on Twitter like, if you're building models, just be careful, because these large companies have the money, they have the resources, and they're going to put out some good stuff. So if I were you, I would think of the next product you can build

with an even smarter model, because that model is coming. It's not an if, it's just a when. So.

Bennett Bernard (52:06)
Yeah. Well, in that article too, that I mentioned just a second ago, they talk about rival LLM providers and what the Llama release means. And there's actually a person, like an expert, I'm not familiar with their name, I think it's in the article, but basically there's a quote that talks about these companies called Cohere and Aleph Alpha. I guess those are some of the closed source models that are out there that are maybe not as well known as

OpenAI and the like. But basically the person kind of calls out, like, hey, they're probably not going to survive this as this keeps going, because they don't have the budgets and the researchers and all that. But they do say, or, they'll survive in a much smaller niche. And that's something that I think is interesting too, because, you know, the term AGI, where it's artificial general intelligence, right? That's the kind of milestone that people talk about hitting. But that's general intelligence, and

Bradley Bernard (52:40)
Yeah.

Bennett Bernard (53:02)
you know, I'm certainly not a technical person, but in an industry that's really niche, for example, let's just take mortgage, because that's where I have a background and some familiarity: there's so many different rules and regulations, there's so many different parties, it's a really complicated transaction. I think like 80% of people surveyed said that buying a house is one of their most stressful endeavors. And so

There's all that knowledge that is part of that specific domain. And I don't think anyone would consider being an expert in those areas as like general intelligence. And so I do wonder, like if there will eventually be spin offs of like specialized LLMs for like these different purposes, or if people will just, you know, fine tune their llama model to be specific for their niche. then that just serves as like, this is our internal resource. It'll be interesting to see how that all kind of plays out for sure.

Bradley Bernard (53:57)
Yeah, there is one called Harvey. I think it's a legal variant of GPT-3 or 3.5, but they were pretty early, and they essentially trained on a bunch of legal documents because they wanted like a lawyer counterpart or something along those lines. And I think with Meta releasing such an intelligent base model, it makes those fine-tunes even better. So fine-tuning, you can think of as, you know, shoving in all this knowledge that's particular to one domain, so

the AI agent or AI model is still good at, you know, general intelligence, but has this refined focus in a certain area. And it's a question of getting a good fine-tune, how much that costs, how much that saves you. Because if you don't fine-tune, you can do other prompting techniques, like

inserting examples of good questions and answers. So you could have like a 5,000-character prompt that says, hey, if you're answering legal questions, here's a few good examples of answers that we like; now answer this question. So it takes a look at what you've sent and tries to mimic it. But fine-tuning, what it does is provide all that knowledge of good question answering

in one stage; then the model gets retrained a little bit, adjusting a little bit of its intelligence to be pointed more in that direction. And then you don't need to have a large prompt. It already knows what's a good answer and what's in that domain. So I think it's gonna be really popular. I wish I knew more, to do a fine-tune, because it feels like a hackerish thing of: I took Llama 3.1, I'm gonna make it an expert Laravel programmer, so I'm gonna feed it all the Laravel docs, I'm gonna feed it all the PHP docs. And then after that, I could

ask it any question, it could do my Laravel projects for me. And I think that is super cool, but it costs a lot of money to fine-tune; it's similar to training. But a lot of these providers are now offering that. So I think OpenAI, literally like two days ago, was saying, come fine-tune GPT-4o, their latest model. And then Google, I think, in the next week or two is going to unlock fine-tuning for Gemini 1.5 Flash.

And so that's like a whole unlock, where I think people are waiting and saying, these base models are so good, I want to go a little bit further in this direction, how do I do that? And this fine-tuning will unlock that. Then you can basically compare: here's the base model, here's my fine-tuned model, give them the same question, see how they perform, and iterate on that.
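The prompting alternative to fine-tuning that Bradley describes, stuffing example Q&A pairs into the prompt, can be sketched like this; the legal questions and answers are invented placeholders:

```python
# Few-shot prompting: show the model examples of the style you want,
# then append the real question. A fine-tune bakes this style into the
# weights instead, letting the prompt shrink back to just the question.

EXAMPLES = [
    ("Can a tenant break a lease early?",
     "Generally only under conditions in the lease or state law; review both."),
    ("Is a verbal contract enforceable?",
     "Often yes, but some agreements must be written under the statute of frauds."),
]

def build_few_shot_prompt(question: str) -> str:
    """Assemble a prompt: instructions, example Q&A pairs, then the new question."""
    parts = ["You are answering legal questions. "
             "Here are examples of answers in the style we like:\n"]
    for q, a in EXAMPLES:
        parts.append(f"Q: {q}\nA: {a}\n")
    parts.append(f"Now answer this question.\nQ: {question}\nA:")
    return "\n".join(parts)
```

The resulting string would be sent as the prompt (or system message) to whichever model you're using; the cost is that those example tokens ride along with every single request, which is exactly what fine-tuning eliminates.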

Bennett Bernard (56:20)
Yeah, that's cool. A couple of episodes ago, we talked about a use case I was trying to do with Amazon Alexa: basically creating an Alexa skill that would read off a game guide for the Hello Kitty Island Adventure game and then be able to siphon back, you know, the answer from that guide. And at the time I was having trouble because I was trying to use ChatGPT's API.

And I was just lightly tinkering with it. But what I want to do, since the release of Llama, is actually install the 70B and use that Ollama tool to try and do that again, because it would be hosted on my machine. Try and have it do that multimodal thing where it can manage multiple windows, I guess, and also do the RAG.

You talked about that one time. It was like retrieval-something, whatever that was. Yeah. So, you know, I want to actually implement that now, and I'd be really curious to see what that process is like, building on it, and hopefully I can get it to a place where it works. And then I can also compare that with, I've been using Anthropic's, you know, Claude a bit more than ChatGPT these days, and using those artifacts, where it tells you how it did it a bit more.

Bradley Bernard (57:11)
Yeah,

So it's excellent.

Bennett Bernard (57:33)
So I want to kind of compare and contrast, if I try and build the same skill with, you know, Claude versus Llama running locally, just how that feels and how the answers are. So I'm going to start doing that, because I'm having the builder's itch a little bit right now, and I think it'll be a fun little project. And if I can get it posted, I'll be excited, because I can tell my kids, hey, this is a skill that, I'll say, we made together, and you can ask Alexa anything about Hello Kitty and she'll,

Bradley Bernard (57:49)
Yeah, I feel that.

Bennett Bernard (58:01)
You know, she'll provide all the answers you need. So dad can sit back, and instead of having to Google everything every time they ask me something, I can just sit back and be like, ask Alexa, she knows. Yeah, yeah, exactly. So it'll be interesting. I'll keep us updated. You know, I don't know to the extent of when I can work on it, but as I work on it, I'll make sure I bring some notes back to this podcast. It'd be kind of fun to do some updates on that. Kind of cool.

Bradley Bernard (58:09)
Ask Alexa.

Mm-hmm. Yeah. I think if you're using AI tools now, just a quick shout-out: Claude is definitely the best, and they have a free version. So if you go to claude.ai, you're presented with their latest model, which I think is 3.5 Sonnet, which ranks really high across a ton of different benchmarks. And then yeah, like Ben mentioned, it has this artifact view. So you have a main chat, you ask it a question.

If it needs to write code, if it needs to produce some other resource, it does a split pane where on the right side will be an artifact. So it kind of gives you more context, more understanding of what it's doing. And you probably get, I don't know, 10 messages a day for free. I've paid for like the premium version. So I get more and it's absolutely excellent to write code, to reason about, to improve like literally anything. It's super, super smart and accurate. But there is a decent learning curve to prompting AI. It's like

never-ending game of understanding which model you're talking to, how it works best. But I think Claude in the general sense is absolutely worth $20 a month. Hey, maybe next week OpenAI drops GPT-5 and next week on the pod I'll say switch to GPT-5. But in the current state of all things, I think Claude

reigns supreme by a decent margin, but I hope it's beaten, because every time a new model pops out, like, it's time to build. Like, I get the itch too. And I imagine all these other engineers and creators are thinking the exact same thing of, like, what does this unlock for me? What can I do now that was either too expensive or too difficult to pull off at the previous intelligence level? So yeah, it's a world of opportunity out there right now.

Bennett Bernard (1:00:02)
Yeah, that's cool. Lots of updates going on with the models, and speaking of updates, let's segue into the next thing we want to catch up on. You mentioned last week that you were doing some interviews, and I guess, do you want to give any updates on how that's been, kind of dipping your toes back in that world, and any news to share?

Bradley Bernard (1:00:21)
Yeah, a little different. The last time I interviewed was 2021, before joining Meta. And so at that time, there's definitely a few companies where I was like, you know, apply to X or Y or Z, and they're pretty known mobile companies. So I've been an iOS engineer by trade, but I've built web apps on the side for what feels like forever now, maybe 10-plus years. And so this time

I took a look back at, like, my Excel doc that had all the job applications that I was going for last time. And probably, you know, 20 or 30% of those companies are now hiring iOS, but outside the U.S. So that means, like, Mexico City or Brazil. So there's a little bit of a different environment. I think it's kind of cutting costs a little bit, finding, like, more cost-effective labor elsewhere. So there's, like, senior iOS

roles, and it's not specific to iOS, but I've seen a trend of these companies kind of shifting outside the U.S. to see what other talent is out there. On top of that, they're building offices in these areas too. So it's not, like, fully remote; it's creating, like, a whole community, a large investment. India, Brazil, Mexico City are the main ones that I've seen. But yeah, it's a little different. And on top of that, like, I'm in Irvine now, much different than being in the heart of, like, the South Bay and, like, the San Jose area.

So when I look at these companies: during COVID, everyone was remote. There was no question, like, if you worked at a company, you were doing video calls and that's how life was. Now it's a little different. So there are companies that in the COVID era said remote is here to stay, we're a remote company, remote-first, kind of rebranded themselves, because at that time a lot of people were job-hopping and that was a plus.

Now it's kind of coming back the other way. So, like, return to office, RTO, is very much in effect. There's a lot of companies, again, that I have wanted to work at for a long time, but they have an RTO policy, so it's a little bit more difficult for me. And it's a little bit of a surprise too, because I think a lot of these companies will say they're remote, or, like, they were remote in COVID.

Then I hop on a call with a recruiter or, like, learn more about their position, and it's, you know, back in office three days a week, five days a week. Usually a hybrid model, but at least some days in office. And for me, when I was at Meta, I didn't go in too often. I definitely was comfortable at home. It wasn't a super far commute for me, but I felt productive and efficient and did a lot of great work at home. I did go in occasionally to, like, socialize and do all that, and I really enjoyed it. And now, 12 months of working for myself...

I definitely have, like, that social itch. Like, I want to get back out there, and, like, an in-office job, you know, doesn't sound as bad. I think when I was pretty focused on remote work, I was very comfortable.

Now I've done remote work in a little bit more of an isolated context. So my thoughts are, yeah, like, remote is fine, I'd be happy to do it, but in-office sounds fine too. I used to be a bit more opposed to it, but now I'm kind of opening things up and thinking, you know, with the way that things are and the trends that I'm seeing, in-office, you know, is maybe a 50% chance for me wherever I land in my next gig. Again, probably remote preferred, but, like, I would be happy to have a cool team. Because I don't even remember what it's like to be

in office, like, on a consistent basis. Like, I think the last time I did that was pre-COVID, and it was normal. Everyone went in five days a week. Like, if you worked from home, it was kind of like, you know, are you getting a house repair? Like, are you sick, or whatever?

Nowadays, or at least when I was working for Meta, like, being remote was very normal. Everyone was remote. Now, as we come back to, like, the RTO status, it's going to be a little bit different. And I think a lot of these companies have pushed people back to the office, but some have made a stronger effort to do that: so, hey, like, your performance will be impacted or you might be laid off. Others are kind of like a soft push: so, hey, we're coming back to the office at this date.

If you're remote, like, you're fine, you don't have to come back. If you're within, you know, 20 miles of the office, come back. So yeah, definitely a little bit different. I think it used to be a lot easier, if I'm being frank. Like, interviewing out of college, interviewing after PayPal, interviewing after LinkedIn, companies

were eagerly hiring in office, but that was the norm, so it wasn't that different. Now it feels like there's a shift to do offshore hiring, a shift to go back in person. So the status quo of the 2021 COVID era has been shaken up quite a bit. And I think when you're not job-seeking in the software world, it's a little bit easy to be removed from it. Like, you're not looking at job postings every day. At least I wasn't; only when I was searching for a job was I looking at what's out there.

Now, as I've been talking to friends and telling people, people are definitely a bit surprised. Like, it's a little bit of a, wow, that's how things are. Like, for better and for worse, like, it depends how you read it, but it's a change-up, and it's something that I think a lot of people

are struggling with, because a lot of people got laid off in the software world over the past two years. A lot of companies, like, aren't hiring, too. There's plenty that I would like to work at or think highly of, and I go to their positions and they're just not hiring at all. So it's, like, lack of hiring, location change or, like, RTO change, then, like, hiring outside the U.S. So all of those things considered, definitely a different job market. However, there's still plenty of opportunities out there that I'm excited about, but I think coming into it was a little bit of a wake-up call to say,

you know, things are different, some for better, some for worse. But at the end of the day, it's like, there are still companies that I'm excited about and, you know, talking to now and happy to move forward with. But I think my impression of, like, getting interviews faster or being able to apply to,

you know, the quote-unquote top companies and hearing back really fast... like, things are just a bit slower, a bit different. And I think if you're, like, a recruiter in the space, you've probably felt it. But if you're an engineer, you'll definitely feel it once you start applying. It's not something you can really, you know, see or feel unless you're in the thick of it. And as I've been starting, you know, over the past few weeks, I've felt it. And yeah, I posted on LinkedIn and kind of said, hey, I'm open to work.

I got a ton of reach-outs. Like, I was extremely shocked at the power of the network, because so many people reached out to me and said, hey, I'll talk to this hiring manager, I'll talk to this director,

see if we can find a spot for you. And to be completely honest, I wasn't, like, thrilled about posting that I'm back to work after, like, doing my own thing. It feels slightly embarrassing, but that's how things are, so I'm not, like, too worried about it. But again, so many people reached out and I was like, this is awesome. Like, I've never really asked for help from my network, and so this is the first time I did it. Got a bunch of referrals, like, companies that I think traditionally I cold-applied to. Like, I almost felt like a referral feels kind of cheap, like you're not getting in by yourself. This time I was like,

if you're going to help me, like, I would love a referral. I just kind of went for it, put myself out there. Like, I'm confident that I'll do well in the interview process to the best of my ability, and, like, that's all you can ask for. And so, yeah, it was great. I was really happy to see people reach out and help out. Cause I think in this job market, every little edge kind of gives you a lot. There's so many candidates, recruiters

Bennett Bernard (1:07:09)
Mm -hmm.

Bradley Bernard (1:07:27)
are, like, overwhelmed with applications. There are some applications I've had that are still unanswered, and either they have too much of a stack or they're just, like, not hiring at all. Who knows? But it's really competitive and it's an absolutely different landscape, so you have to be really well prepared.

Bennett Bernard (1:07:42)
Yeah. Yeah, that makes sense. And it's interesting, as you were talking, I was thinking, I don't know if this is true or not, but I feel like it either is true or it's close to being true: you've been remote more than you've been in office. Like, you... I want to say working, because I can't remember when you graduated school and started working professionally, like, full-time, but it's gotta be pretty close.

Bradley Bernard (1:08:02)
This was like July 2017, so if we think about that till, when was COVID, March 2020 or something like that. It's close, yeah. Yeah, yeah, it's definitely close.

Bennett Bernard (1:08:12)
Yeah, so it's close. It's close if it's not there. I didn't do the math, but yeah. Yeah, and it's interesting with hybrid. You know, I feel like, you know, obviously companies need to make the best decision for what they think their workforce will do and respond to and all that. But, you know, I do think a lot of companies that were remote and then switched to hybrid just feel like they're going to eventually just switch back to fully in the office. I get it, hybrid is the one to me that makes the least amount of sense. It's almost like, either commit to remote or commit to being in office.

Bradley Bernard (1:08:35)
Yeah.

Bennett Bernard (1:08:41)
Hybrid is like, you can't take advantage of, like, the benefit of, like, maybe working in a different state that has a lower cost of living, and, like, you know, being able to kind of move your family where you want to move but still be with the company you want to be with. But then being in office, if you're going to say, like, hey, being together is really important to our culture, then, like, we need to be in office. And it seems like that would be, like, a universal, we-need-to-be-in-office-all-the-time kind of thing, if we're really driving the productivity-is-better kind of pitch on that.

You know, again, it's up to the companies, of course. You know, I think companies spend a lot of time and resources trying to understand what is best for their workforce, and I think they make that decision independently, what works best for them. It's, you know, totally fair. Cool.

Bradley Bernard (1:09:25)
Yeah, I've seen the trend of mostly smaller companies.

being in person. Like, OpenAI, Anthropic, these companies are, like, San Francisco, five days a week, you know, and I get it. I think they're kind of grinding in that phase of a startup and high pressure. There's definitely a handful of companies that are full remote, like, manage your own time, we trust you, high confidence, high autonomy, but, like, still have a high bar of, like, you need to get stuff done, you need to be, you know, prioritizing correctly, executing correctly.

For my time at Meta, that was mostly how it was. There was always too much to do, it felt, at Meta. And it was choosing the right things to do and then executing to your fullest on those projects. And so, looking for a new opportunity, I'm definitely talking to various sizes of companies, seeing what's out there. But at the end of the day, there's so much that goes into it, like product fit, people fit,

like, what tech stack they're using. Especially on the mobile app side, there are surprisingly a lot of different architectures and languages in use, so finding one, if I have multiple options, and choosing the right one is going to be big. Then a lot of it's, like, a vibe check. Like, when you're having interviews, talking to recruiters, like, you can tell when people aren't interested. You can tell when people are kind of doing the bare minimum. I try my best to be on the opposite side of that. Like, I usually always send follow-up emails, like, try to respond quickly.

And again, a little different for me since I don't have a full time job, like I work for myself, so I have more time to spend to get things right.

But I feel like going the extra mile, being nice about it, thanking people for, like, you know, doing what their quote-unquote job is, like, never hurts. Again, I feel like when I posted on LinkedIn, people reached out to me who I didn't expect. Like, I really didn't expect a lot. I was thinking, I'm probably cashing in, oddly enough, on, like, just being a nice person and, like, helping others. Like, you know, it all goes around. And I think in the interview process, it can be easy to,

I don't know, not appreciate people for doing a lot of work. Cause there's a lot that goes into it: scheduling, talking, taking engineers' time. Like, stressful for them, stressful for you. I've been on both sides. Like, if a candidate isn't doing well in an engineering interview, it can be, like, not frustrating, but you want to help them. Like, you want them to do well. Everyone wants people to do well. So it's a lot of effort, a lot of time, like, a lot on your mental. And so just thanking people and going out of your way to make sure people are comfortable and happy, like, I feel like it really goes a long way,

both in the workforce and just talking to companies.

Bennett Bernard (1:11:53)
Yeah, totally. And I think that extra effort, you know... effort is free and being kind is free. And so, just, if you can do that... it's, I think, easier said than done. But yeah, I agree, it's definitely appreciated when you see it out in the workforce. Cool. Well, let's kind of wrap this up here. We'll kind of go over our interesting bookmarks and topics for this episode.

I will start, and I'm not going to spend too much time on mine, actually, because mine is very much what we talked about already with Llama. So, you know, for the sake of not being redundant, I just have a link from arstechnica.com about, you know, the release of Llama 405B. And so it's a great read. Again, it goes over a lot of things that Brad and I discussed. One of the things that I will say that we didn't really touch on... we talked about the costs and how it'd be expensive, but in the article,

there's a direct reference to how expensive it is. And I think someone estimates that it would cost something like $300,000 a year to, like, deploy that on a full scale with, like, two H100 Nvidia servers. I don't know how much those are individually, but this person was saying, hey, renting two of these H100 servers for the year will cost you around $300,000 a year. And I remember being like, wow, that's crazy. Like, that's wild. So,

Bradley Bernard (1:13:10)
Yeah.

Bennett Bernard (1:13:13)
yeah, so I thought that was interesting. But then again, there's lots of other good details in the article, so we'll link that in the show notes. But I won't go too much further into it because, again, we talked about it already during the show. But yeah, what about you?
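As a rough sanity check on that $300,000 figure from the article, the arithmetic works out if an 8xH100 node rents for somewhere around $17 per hour; that hourly rate is an assumption for illustration, not a number from the episode or the article.

```python
# Back-of-the-envelope check on the ~$300,000/year figure for two H100 servers.
# The per-node hourly rate is assumed for illustration; real cloud prices vary.
HOURLY_RATE_PER_NODE = 17.0   # USD per hour per 8xH100 node (assumed)
NODES = 2                     # the article's estimate used two servers
HOURS_PER_YEAR = 24 * 365     # 8760 hours, running around the clock

annual_cost = HOURLY_RATE_PER_NODE * NODES * HOURS_PER_YEAR
print(f"${annual_cost:,.0f} per year")  # prints $297,840 per year
```

At that assumed rate, two nodes running year-round land just under $300,000, which is consistent with the estimate quoted on air; a higher or lower spot price moves the total proportionally.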

Bradley Bernard (1:13:24)
Cool. Yeah, so this week I picked up a Twitter bookmark from Martin Bowling. And so he created a project that essentially optimizes your prompt for a smaller version of Meta's Llama model. So as we talked about, they have the 70B, kind of the mid-tier:

good enough, decently cheap, but also a little bit slower. Then they have the 8B model, which I think I might've said was 7B, but 8B I guess is the official size: smaller, dumber, but cheaper. And so he created a tool, and what it does is it takes your prompt and it optimizes it for their 8B model.

And so again, I haven't used it, but the thought process usually is: get the smartest model, figure out your use case, and then scale it down. How far down can you go on the intelligence level such that your use case still works? And so I think his tool is an excellent, you know, tool in your arsenal to say, hey, like, my use case works with 70B, but it costs me maybe $2 per million tokens, and tokens would be, roughly, chunks of characters in your prompt. But if I can bring it to Llama 3.1 8B,

maybe that costs me 50 cents per million tokens. And so you can take that same prompt that you're working on, throw it into his, like, prompt conversion tool, which will then fine-tune it to make sure it works, you know, just as well or better on 8B, and then test your use cases. You can go side by side, throw in a few queries, see the responses. And the goal is: how far can you go with dumber intelligence? I think this tool can help you scale that down much quicker. So yeah, pretty cool. I'll link it in the show notes, but again,

start at the high end, end up at the low end, that's the goal. I think if you can make that easier, who wouldn't want something like that? So this will be a nice tool to have.
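The scale-down math Bradley walks through can be sketched directly. The per-million-token prices below are the ballpark figures he quotes on air ($2 for 70B, 50 cents for 8B), not official pricing, and the monthly token volume is a made-up workload for illustration.

```python
# Rough cost comparison for the same workload on Llama 3.1 70B vs 8B.
# Prices are the ballpark per-million-token figures mentioned in the episode.
PRICE_70B = 2.00   # USD per million tokens (ballpark, not official pricing)
PRICE_8B = 0.50    # USD per million tokens (ballpark, not official pricing)

monthly_tokens = 500  # millions of tokens processed per month (hypothetical)

cost_70b = monthly_tokens * PRICE_70B
cost_8b = monthly_tokens * PRICE_8B
savings = cost_70b - cost_8b

print(f"70B: ${cost_70b:,.2f}/mo, 8B: ${cost_8b:,.2f}/mo, savings: ${savings:,.2f}")
# At these assumed prices the 8B model is 4x cheaper per token, so if a prompt
# rewritten for 8B preserves answer quality, the downgrade pays for itself.
```

The ratio is the useful part: with these assumed prices, every workload you can push down to 8B without losing quality cuts that slice of your token bill by 75%, which is exactly why a prompt-conversion tool like the one described is worth trying.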

Bennett Bernard (1:15:13)
Yeah, it helps with that tricky prompt engineering that you mentioned before, about getting that balance right. Yeah, totally. It's a good skill to develop. Cool. Awesome. Well, let's wrap this up. We'll put all our links in the show notes and get it all up on the usual places. But yeah, good stuff, and we'll do this all again next week.

Bradley Bernard (1:15:17)
Really hard. Yeah. Yeah. Cool.

Sounds good. See ya.

Bennett Bernard (1:15:34)
Cool, see ya.

Creators and Guests

Bennett Bernard
Host
Bennett Bernard
Mortgage Accounting & Finance at Zillow. Tweets about Mortgage Banking and random thoughts. My views are my own and have not been reviewed/approved by Zillow
Bradley Bernard
Host
Bradley Bernard
Coder, builder, mobile app developer, & aspiring creator. Software Engineer at @Snap working on the iOS app. Views expressed are my own.