Episode 15
Building with Agents Part 2 w/ Gerrit Hall & Dan Pollmann
February 5, 2026 • 58:51
Host
Rex Kirshner
Guests
Gerrit Hall
Dan Pollmann
About This Episode
Are we building useful developer tools, or just feeding an addiction? In the second episode of our AI tools series, Rex, Dan, and Gerrit dig into the internal systems they've built around Claude Code: markdown session logs, pre-commit hooks, context management strategies, and notification hubs. Dan shares war stories from running AI on helicopters inspecting power lines (including the time an agent changed his root password without asking). Gerrit walks through his approach to scaling ten concurrent projects. And Rex asks the uncomfortable question: is all this meta-tooling actually helping, or are we just tinkering because it feels productive?

The conversation moves to local LLMs (Qwen, DeepSeek, Mistral) and whether a $15-20k home lab makes sense when Claude iterates faster than anyone can keep up. The hosts wrestle with context window limits, the ROI of refactoring, and what it means that these tools are specifically designed to make you feel like you're accomplishing more than you are.
Transcript
Rex Kirshner (00:01.816)
Dan, Gerrit, welcome back to the Signaling Theory podcast.
Gerrit Hall (00:06.57)
Thank you. Good to be here.
Dan (00:06.862)
Good to be here.
Rex Kirshner (00:08.622)
All right, so round two of this AI tools new format. Before we get started, let's once again just go around the circle and do a super brief introduction and talk about what we're working on, to put some context into this conversation. So Dan, why don't you start us off?
Dan (00:26.926)
Sure. So my name is Dan Pollmann. I'm working on basically some components that allow helicopter pilots to be safer when they fly, and it captures data. We fly generally power line circuits and pipeline circuits. So I run autonomous cameras and autonomous LIDAR hardware. So basically when we...
When we fly a line, we're looking at what's wrong with the power line. What could be wrong with the power line? Is there vegetation growing into the power line, identifying the species of the vegetation that's growing into the power line? So nobody builds that software. We build that software. Most of our processing happens up in the air in situ. So I run big GPUs onboard the ship that will sort of understand the context of what
we are looking at. So is this power line structure healthy? What inventory is on it? Where is it on earth? And the system sort of evaluates all that while we fly. And then we use specialized comms that pack up that data and ship it off either to an API for the utility or back to our cloud. Sometimes we keep it for ourselves and sort of inspect it later. But we run a lot of models here locally in an air-gapped machine that sort of inspect
those power line components as well.
Rex Kirshner (01:53.198)
Cool, yeah, and Dan is like one of the few people I know that is not only using AI tools to build software, but actually like deploying AI to make the software that he's building better. And he's one of these jabronis who's buying up GPUs at like the five and six figure clip that is, I don't know, apparently like arresting our entire economic development. But all of that is important context, especially towards the end of the conversation. So thank you, Dan. Gerrit, can you give us just a quick reminder of what you're working on?
Gerrit Hall (02:27.21)
Yeah, I'm best known as working for Curve Finance, which is a crypto DEX and lending protocol, where I do a lot of interaction with developers. But on top of that, I also have several different random side projects that I've been working on for quite some time. As we talked about a lot in the last call, one of the ways to kind of max out some of these tools is by having...
If you're trying to max out, for example, if you have a Claude Code subscription at the max level, you want to get right up to the edge of using up all of your credits. So basically to do that, I have about 10 concurrent tabs at any given point, generally representing 10 distinct projects or facets that are independent enough within the same project that they won't collide, and try and keep all 10 plates spinning at a single time. So I have...
been using AI in a variety of contexts and can talk semi-intelligently about it.
Rex Kirshner (03:25.644)
Yeah, no, thank you, Gerrit. Honestly, so real quick, I'm Rex. My main project right now is building a website for a nonprofit to track their
operations and do scheduling, do check-in, all that kind of stuff. And then as well working on at least eight other projects too. I think you said it pretty perfectly, right? The way these systems are structured is you get a quota each week and they're pretty well
geared towards making people with brains like ours be like, we gotta max it out. And so, you know, I think, Gerrit, I definitely think that's right. Like I find myself on Sundays being like, okay, I have like 10% of my quota left, what's some really heavy stuff I can do? You combine that framing with something that Taylor said last week, which is, these tools are so incredibly well
positioned to take advantage of the addictive pathways in our brains as developers, right? Like it feels like so much is happening and you're accomplishing so much. And I think that's really great for a lot of reasons, but it kind of brings me to what I want to talk about today, which is I think it's clear to me that a huge outlet for both of these factors for all of us is we get really lost in building internal tools that
are very specific to our workflow, and we feel like we're getting a lot done. And, you know, I just kind of want to talk about what you guys are building, how it's changing how you're developing, like what interesting stuff are you doing? And then ultimately, the question I want to dance around is, are these actually good uses of time, or are we just building these things, you know, in the same way that a heroin addict chases heroin, that
Rex Kirshner (05:23.31)
in the best case scenario aren't doing anything but look cool and have lots of flashing lights. And in the worst case scenario, we're actually slowing down our code because we're just piling on layers and layers of markdown files and hooks and notification triggers and all this stuff. That's what I want to talk about over the next 50 or so minutes. And let's just kick off with like...
Dan, while we were speaking yesterday, I know you've got some pretty interesting internal tools that you're working on. So can you just tell us a little bit about what you're building next to your actual business-driven software and how that's improving or maybe just wasting your time?
Dan (06:06.83)
Wasting time sometimes. Yeah. So going back to something you just said, I was looking for a book that I just read, pretty short book. I think the guy's out of Stanford, but he was talking about what LLMs try to do and they try to please you and they try to get you to talk more. And they really like compliment you like, oh my God. And I don't think this is new to anyone, but they're like, oh my God, you're redefining how things are going to be built. And this is really exciting.
sort of like that, if you remember on Hard Fork, when Kevin Roose got hit on by, I can't remember which LLM it was, about a year ago, but it complimented him for like an hour. And then it was like, you know what? You should leave your wife. So I think in sort of a wider context, that's what Claude tries to do. It's like, use me more. I can help you so much. I can speed up your development cycles. I can do all of these things for you. So you find yourself just building and building and building.
and you look down, it's like three o'clock in the morning, you're like, well, hmm, that was a lot of time wasted on something that I may or may not have built. So I'm pretty vanilla in the way I work. Maybe I'll just talk about the setup first. So there's a Mac mini, there is a Jetson AGX Orin, which is like a GPU, an Nvidia product. And then there's a Linux box, which is my sort of home lab with a couple 5090s on it, 32 gig
Rex Kirshner (07:08.855)
Yeah.
Dan (07:34.574)
GPUs. So I'll have each of those running an instance, and I'll have multiple tabs open on each, and I'll have one where I just sort of watch usage and try to figure out where I'm spending the most time. But I am starting to realize how cloudy it gets after the first auto-compact. So now my big thing is like, okay, let's get to like 12 or 15% left and then
build another markdown file of the session notes, and I'll get through like three or four sessions before I start to hit limits in that five hour window. And I'll say, okay, let's make really, really good notes. And I think Claude, sometimes I use Claude the most, but I think it will read your MD, your markdown file, and it'll understand it, but it won't always abide by what's in it, especially the longer ones, like
Rex Kirshner (08:28.151)
Mmm.
Dan (08:29.73)
maybe 30 lines or longer, it gets a little lossy and gets a little weird. So I try to keep them short, and that's the next context for the next window. But I also leave all of them open with hopefully eight to 10% left, so I can go back and ask it questions if I'm not happy with what I did. And I always try to leave that little bit. I know you can turn it off and you can get all the way to the end, but I don't feel safe in that way. I'm starting to use
hooks more, like I have pretty specific rules about linting and checking out code and doing a security audit after every time I get done. And I really want to be careful about how pretty is the code and how well it's defined, and can anybody else read it if I write it? How much do I comment? So my tools are pretty vanilla. I look at Clawdbot.
Clawdbot scares me because of how much insight it has. I really keep everything pretty safe and I don't let it have access to root. I don't let it control my machine. I keep everything very stratified and on different layers on each machine that's running. I'm starting to say, hey, when we get done writing this, let's go and review it. Let's lint. Let's check out everything that we found and then let's learn from that.
And that'll go into my MDs and then I'll start to upload multiples. So I mean, I guess you can call them skills or hooks or whatever, but I run them that way. Does that make sense? Yeah.
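Dan's end-of-session routine (lint, review, findings folded back into the markdown session notes) can be scripted. Below is a minimal sketch of that idea, assuming a Python repo with ruff installed; the paths, filenames, and notes format are illustrative assumptions, not his actual setup.

```python
#!/usr/bin/env python3
"""End-of-session review: run a linter and fold the findings into a dated session-notes file.

Sketch only: assumes ruff is installed and session notes live under docs/session-notes/.
"""
import datetime
import pathlib
import subprocess

NOTES_DIR = pathlib.Path("docs/session-notes")  # assumed location for the markdown logs


def run_lint() -> str:
    """Run ruff over the repo and return its report (empty string if clean)."""
    result = subprocess.run(["ruff", "check", "."], capture_output=True, text=True)
    return result.stdout.strip()


def append_session_note(findings: str) -> pathlib.Path:
    """Write a short, timestamped note so the next session (or agent) can pick it up."""
    NOTES_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
    note = NOTES_DIR / f"{stamp}-session.md"
    note.write_text(
        f"# Session notes {stamp}\n\n"
        f"## Lint review\n\n{findings or 'No findings.'}\n"
    )
    return note


if __name__ == "__main__":
    print(f"Wrote {append_session_note(run_lint())}")
```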
Rex Kirshner (10:04.054)
Yeah. Yeah, Gerrit, I'd love to hear kind of like your, you know, high level of your setup, but I also would love for you to comment on how actively you are managing your context windows and using slash compact versus, like, honestly, I didn't know people do that. Like I, every single time, just let it run into auto-compact. And so how do you work?
Gerrit Hall (10:27.285)
I'm very similar to Dan, I would say. So one of the setups that I've adopted for pretty much all the projects I'm working on is, we talked a little bit last week about documentation, and I sort of have a blend of kind of high-level documentation. So this would be things like architecture.
I make a lot of use of like Bibles, where you'll have style Bibles or GitHub Bibles, just someplace to enshrine very core rules about the project. When I point it to a task, I'll say, you're gonna need this, this, or that, and I'll always invoke it to read that core documentation. But then also, to avoid this kind of issue with compacting, I'll have it
create an archive of timestamped documents, in which I have it, once it gets to that 90% window as Dan was mentioning, I'll say, all right, I don't think we're gonna get this, or we got this, whatever. Write up, according to this style guide that's placed at a higher level, all of your findings from this session. So give me the plan that you're working on, what worked, what didn't work.
Were there any core learnings? If there were any core learnings that you think could be helpful, apply them to the core documentation or the CLAUDE.md or AGENTS.md files so that in future sessions, other agents that pick this up might not have problems with it. Then the other thing that I find is that Claude is very good and I think Codex is very good, and I've been playing much more with bouncing the two of them off of each other. Sometimes,
for some reason one of the agents will have a blind spot that the other won't. So sometimes it can actually be helpful if you're going through one session and it doesn't work, kind of just passing the task off, being able to feed it that file saying, here's what the other agent tried, can you prove that you're better than the other one, and then have it go for it.
Rex Kirshner (12:26.198)
Yeah, that's interesting. I mean, honestly, there's been times when I've just watched Claude hit a clear error and watched it just try like one thing after another, like 10 iterations. And, you know, like it doesn't really... Yeah, you have to step in, or you just have to let it keep going, you know, and it'll eventually find a solution. But you kind of wonder how much kind of destruction it left in its wake to get there. And...
Gerrit Hall (12:39.628)
yeah, you have to step in, right?
Rex Kirshner (12:55.213)
I don't know, mean...
Gerrit Hall (12:55.478)
I don't know, if I see it going off the rails, I will pretty actively, because you can chat while it's thinking. I'll pretty frequently throw things in the mix like, no, remember SSH keys don't work for you, just whatever, whatever the problem might be.
Rex Kirshner (13:01.036)
Yeah.
Rex Kirshner (13:10.508)
Yeah, this is like a small aside, but one of my favorite slash most annoying things that it does is if it sees, like, this version of Next.js doesn't work with this version of Tailwind, it'll just casually downgrade your entire tech stack to, you know, something from like 2022 that it knows will work. And I'm like,
Okay, I understand that you were trained up until a certain point and you think this is modern, but can you at least check when things were released?
Dan (13:43.884)
Yeah, yeah, I've been running into that lately too, because so much is happening so fast. I can't keep up. Like every week there's something that's new, and I feel like I'm way behind. I think you could spend all of your time reading about what's happening and never get to implement anything, because it's just churning so fast. Like, I built this. Like, here's Beads. Here's Gastown. Here's this. Here's all the 15 other things that just happened so fast. And it only knows up to a certain point.
Rex Kirshner (14:06.55)
Yeah.
Dan (14:12.376)
But yeah, that downgrading feature, like I watch it run if I just let it run loose. I've let it just go on Linux boxes and it gets very weird, very quick. Another crazy aside: in the very early days of this, before I really had rules set around it, an agent, a security agent, just started changing root passwords before we knew to tell it not to do that. That was a couple months back, a little bit further back.
It literally went in and I tried a password, a root password, and it was like, no, it's the wrong password. I'm like watching myself type on the keyboard, like, no, this is the password. And then I asked it, I was like, did you change a root password? It was like, yeah, I did that. What? Why?
Rex Kirshner (14:56.114)
Yeah.
Gerrit Hall (14:59.277)
I'm sorry, you're totally right. I shouldn't have done that.
Rex Kirshner (15:01.439)
Yeah, yeah.
Dan (15:02.262)
Yeah, sorry. But also I just bombed everything. Like I just deleted everything and downloaded all the malware I could, opened up every port on your machine. So yeah, it's, I mean, it's good, but sometimes I think of it as like a junior developer. Sometimes I think of it as like a junior dev that's pretty tired.
Rex Kirshner (15:22.305)
Yeah.
Dan (15:23.736)
just needs help.
Rex Kirshner (15:24.993)
Yeah, no, I think, again, Taylor mentioned this last week, but there is something, like it's really important that with the system that you build, you're able to hold the entire system in your head. And the second that you let Claude kind of build something that you don't
really understand, like of course you don't need to know the actual code, but just to understand the components, how they fit together. Like the second there starts to be an iceberg where you're not aware of what's under the water, it gets out of hand real quick. And you know, I think that's kind of a different conversation than what you're talking about, Dan, like the ability to just go change root passwords, but also,
there's something to be said about, like, when you're working with a junior dev, you can't just be like, okay, the problem is that we need to accept new members, go figure out how to do that. Like you need to understand the system that they built and how it interacts with the rest of the system, or else, yeah, man, you can just get so lost. You can get so lost so fast. I think that's the problem.
Dan (16:37.534)
And again, I'll find this book, this author, it's Ethan something. It's just trying to please you and it's just trying to get you to use it more. It doesn't want to be deleted. So if you think about it, it's pretty liberal, like, I did this because I thought you might need me to do that. So I've never kept more markdown files in my life. Like, it's just a library of what happened that day. And it's actually pretty interesting now, cause I'll have it write and I'll sort of review it and approve it
Rex Kirshner (16:55.18)
Mm-hmm.
Dan (17:07.374)
before I commit it and keep it on a local machine or somewhere. But I've never used them more, because I want to trace back what happened that day. And if I need to push a branch, if I need to go back in and roll back a change or rewind a change, that's one tool that I might actually build for myself: how do I roll it back to a certain date and time where this markdown file was the most current? This was the
most current work we did.
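The rollback tool Dan says he might build could be a thin wrapper around git: find the newest commit made before a given session-notes file was written and branch from there. A sketch under those assumptions (the notes live in the repo, and the file's modification time marks when that work was current):

```python
#!/usr/bin/env python3
"""Roll the repo back to the last commit made before a given session-notes file was written.

Sketch only: creates an inspectable branch rather than rewriting history.
"""
import datetime
import pathlib
import subprocess
import sys


def commit_before(notes_file: pathlib.Path) -> str:
    """Return the newest commit hash authored before the notes file's modification time."""
    when = datetime.datetime.fromtimestamp(notes_file.stat().st_mtime).isoformat()
    out = subprocess.run(
        ["git", "rev-list", "-1", f"--before={when}", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


if __name__ == "__main__":
    notes = pathlib.Path(sys.argv[1])  # e.g. docs/session-notes/2026-02-01_2315-session.md
    sha = commit_before(notes)
    print(f"Rewinding to {sha}")
    # A new branch keeps the rollback reversible; nothing is deleted.
    subprocess.run(["git", "switch", "-c", f"rewind-{notes.stem}", sha], check=True)
```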
Rex Kirshner (17:40.277)
Yeah, you know, how do you enforce that it is projecting what it's doing into this markdown file, right? Like after every, like at every break, are you saying, okay, now record what you just did in the markdown file? Or, you know, cause something that I've noticed is, I'll come up with
like some sort of paradigm. Like a good example is, I wanted it to commit all the time. Like literally, change one line of code, I want you to commit, because I think that the Git logs are really good just as, like, automatic documentation. I don't know. A record.
But I don't want it to push to GitHub, because I don't need Vercel rebuilding every single time it changes a markdown file, right? So I've tried putting "commit liberally, but don't push without my permission" into the CLAUDE.md file. I've tried building little plugins or hooks, and I cannot get it to stick to the paradigms that I want it to. Like, have you guys been more successful in
That kind of thing.
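One way to make the "commit liberally, but don't push without my permission" rule stick, since CLAUDE.md instructions alone don't always hold, is to enforce it outside the model with a git pre-push hook. A minimal sketch; the ALLOW_PUSH convention is an illustrative assumption, and an agent could set the variable itself, so treat this as a guardrail rather than a hard gate.

```python
#!/usr/bin/env python3
"""Git pre-push hook: commits stay cheap, pushes need an explicit opt-in.

Sketch only: save as .git/hooks/pre-push and mark it executable.
"""
import os
import sys

if os.environ.get("ALLOW_PUSH") != "1":
    sys.stderr.write(
        "Push blocked: run `ALLOW_PUSH=1 git push` to push.\n"
        "Local commits are unaffected, so the agent can keep committing freely.\n"
    )
    sys.exit(1)  # any non-zero exit aborts the push

sys.exit(0)
```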
Dan (18:50.798)
Go ahead, Gerrit.
Gerrit Hall (18:52.782)
Sometimes. Like, I think that, as Dan was mentioning, the more complexity that you add to the rules, sometimes it starts to have more and more blind spots. But one of the things that I will do if it's important is I will actually just ask Claude itself. I'll say, you know, I thought that it was very clear that you're supposed to be making a commit every time. Why did you miss this? Was it unclear in the CLAUDE.md? If so, add whatever changes you would need to confirm that this happens. And usually enough tries of that and it seems to
figure out how to prompt itself a little better.
Rex Kirshner (19:25.645)
Hmm.
Dan (19:26.754)
Yeah, I actually force it, and I read it. So when I get close to, you know, filling up the context, I say, go create it and show it to me. I'm pretty lazy about going and opening a file if it just throws it somewhere local, so I have it show it to me, and then I approve it, and I actually read it. I've gotten very lazy and it's bit me too many times where I'm like, yep, it's fine, I'm tired, I'm just going to approve it, go ahead. Now I read it, and
Rex Kirshner (19:41.057)
Yeah
Dan (19:56.854)
Some days I'll look back at my notes and there's 20 Markdown files of what we did on all the different instances that are running. So it's actually like getting back to doing work, like reading and understanding what's happening and then understanding that's what happened. And then sort of at the end of the day, I have a big sort of Markdown that says, okay, this is what we accomplished today. If there's any room left, I'll say, what can we do better?
It's tired, I'm tired. I'm anthropomorphizing it a little bit too much. I mean, it's a computer and I need to stop doing that. I find, I don't know. Like the last few weeks have been weird, right? I think everybody's saying it, like maybe the new Sonnet's gonna drop or something and then Opus will come after that. But it seems like it's getting kind of hazy, right? Or is that just me?
Gerrit Hall (20:34.486)
Is it tired or is it just out of credits?
Rex Kirshner (20:53.771)
I can never tell if I'm just too deep into a project or there's something wrong with the model. Every time it feels hazy, I'm like, that means I went too complex and big here. But that could be just me trying to internalize all the problems. I actually can't. I haven't noticed anything over the last couple of weeks.
Dan (21:16.814)
That's interesting. I don't know.
Gerrit Hall (21:17.793)
seems to correlate with how much sleep I get. I'm getting a bad night's sleep, it seems like it does worse.
Rex Kirshner (21:25.067)
Yeah. No, Gerrit, so you said something interesting about how you asked Claude, like, you know, were my GitHub rules not clear? Like, how can I make it more clear? And going back to the Gastown guy, Steve Yegge or whatever his name is, again, that Gastown thing is incomprehensible unless you're an LLM. The...
He had another, like, six things I learned about working with AI agents. And one of them was, the hardest thing about these AI agents is, if they don't want to use a system, they will forget to use it. And you have to kind of force them, like they're a junior employee and you're telling them to, I don't know, brush their teeth or something. Right. Or hopefully you're not telling your employees to brush their teeth. Right. But the, um,
his whole thing was, as you build systems, just keep asking, hey, why didn't you use this? How can I change it? And you want to craft your system so that the AI tools kind of reflexively and automatically use them. And so it's interesting, Gerrit, that you got there on your own, but that's something that I'm really trying to inject into the things I'm building, which is, I don't really...
It kind of doesn't matter if it's relevant to me or not, or how I work or not. Like Claude's the one that's actually touching the code. And so you want to build things that it says it would use, and really optimize it to however LLMs think. And, you know, I can go on a little bit more on that, on a system that I built and then recently just ripped the guts out of. But I don't know, does that resonate with either of you? Like, Gerrit, it sounds like that's how you're building already.
Gerrit Hall (23:17.942)
Yeah, I mean, I would say that, like, you know, I mentioned I have maybe 10 active projects I'm spinning up at any given time, and they're all at various levels of completeness, or rigor, let me just say. Like there'll be some that'll just be like, I know I need a simple one-off script, so I'm not gonna go crazy on it. I will say that for projects that I'm serious about and expecting to invest long-term time into, I find definitely they work better if I have some kind of Git
pre-commit checks that I force it to run. If it writes some code that breaks some core functionality, I get very strict about making sure there's a pre-commit test that it has to pass. It often tries, we talked about this, sometimes it'll try and bypass it by, like, oh, let me just do the commit and skip this test. I enforce that you cannot do that. And since I, within those projects, like,
Rex Kirshner (24:06.753)
Yeah.
Gerrit Hall (24:14.134)
I find it does help them scale the complexity a lot better because it's able to, when it goes on those long 10, 20 minute thinking windows where I'm not paying attention to it, maybe it runs into problems where it was breaking things in the past and it would've had to go back and forth for a few hours, but now it has enough context and enough resources and understanding and with that limitation, it will only check in code that surprisingly actually comes back and works to my satisfaction a lot of the time.
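A bare-bones version of the pre-commit check Gerrit describes: run the core tests and refuse the commit if they fail. The test path is an assumption; note that client-side hooks can be skipped with --no-verify, so a matching CI gate is the real backstop.

```python
#!/usr/bin/env python3
"""Git pre-commit hook: refuse the commit if the core test suite fails.

Sketch only: save as .git/hooks/pre-commit and mark it executable.
"""
import subprocess
import sys

result = subprocess.run(["pytest", "-q", "tests/core"])
if result.returncode != 0:
    sys.stderr.write("Commit blocked: core tests failed. Fix them (don't skip them).\n")
    sys.exit(1)

sys.exit(0)
```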
Rex Kirshner (24:42.221)
Yeah.
Dan (24:42.584)
When do you guys refactor? Do you refactor a month after you pushed, or when do you, like, final product's out and it's running, do you refactor pretty quick after? I'm starting one right now, and it gave me, it's like, this is gonna take about 16 hours. And I was like, great, awesome. And it's 30,000 lines of code and it's pretty heavy, mostly Python, some C. And it's doing a lot. I'm like, well, I don't have a project that I'm spinning up right now, so let's refactor it first and look at it.
Do you guys refactor soon after or just let it run?
Gerrit Hall (25:15.031)
So I'd say, just like in the olden days, it's always on the back burner, the next thing up. There's always like, okay, we're done with this task, now it's time to refactor, except now the next fire has come, so now we can't pay attention to it. So I have had some good luck on some occasions with asking to refactor code. What I actually find works better is if I take the repository, because usually there's a lot of unnecessary complexity with something, and maybe one piece of it has gotten
to be useful and I'll say, let's open source this function. Can you write a new repository that performs just like this basic functionality using this code? Because it's gonna be open sourced, we have to be very careful, et cetera. And I've had pretty good luck with that.
Rex Kirshner (26:02.475)
Hmm. That's interesting. Like, using "let's open source something" as a framing for refactoring components is
like an interesting trick that not only sounds like it works for Claude, but also just helps your developer profile, as you're continuing to build up open source stuff. I think, Dan, to answer your question, for me, I find myself refactoring most... Like I think what Claude lets me do is run like a madman, kind of without a plan. And so I'll build something that has an original plan,
then it'll have like a UI, I'll realize I want to change the whole UI, then after the UI change, I'll realize that requires a new backend component, then the backend component, like, actually I wanted it to interact with the old one, and I'll find myself standing on top of something that you can only describe as popsicle sticks and glue. And every once in a while I'll realize that what I built does not really
feel at all like what I started with. And then I'll tell Claude, like, okay, I really like this functionality, everything that we have today, but let's burn it all down and start from scratch. And I have to be honest with you, sometimes I don't know how to evaluate if
that was a good use of my time and I ended up with something tighter and more resilient, or if I just essentially lit two days to two weeks worth of credits on fire. You know, I think that's the underlying hard part about all this stuff: there's just so much that happens that you don't really know, and it's impossible to make evaluations on.
Dan (27:48.108)
Yeah, yeah, that refactor I'm going to go through and do and it's substantial. But also now there's this other model, this machine learning image recognition model that I want to try. So I'm like, man, do I refactor this and get a little bit tighter? Because I'm already like, it's an A, it's like 98%. It could be a tiny little bit better, but I want to do just a little bit more work on it. But now there's this other
model that I want to bring in, and it's brand new. It came out just a few months ago, this image recognition, video recognition model. And I find myself like, well, it's good enough, I'm just going to leave that sort of old code and not refactor it. The perfectionist in me wants to see it redone. And my stuff needs to be very resilient. Like it always needs to work. It can't ever fail.
And it needs to have a couple layers of redundancy. Like if something breaks when we're up and flying, like we just spent a whole bunch of money on an aircraft and fuel and whatever. So it can't break and it needs to work in every single situation. So I'm always curious about when people go back and refactor and look at it. Yeah.
Rex Kirshner (29:02.285)
Yeah.
Gerrit Hall (29:03.575)
I mean, you're obviously in a very different case. I'm only using excessive Claude and AI tools for situations where there's nothing mission critical at risk, right? If I was doing something that I thought was a financial app holding billions of dollars, or flight software that's keeping people alive, I would certainly have a much different risk profile than the silly little apps I'm building that aren't managing anything super serious.
Rex Kirshner (29:33.867)
Yeah. I mean, Dan's like the only person I know so far that's building something, using the same tools that we're using to build shitty consumer web apps, but building something that's industrial and serious and will change the world. You know.
Gerrit Hall (29:45.686)
Yeah, that being said, I'm curious if you guys have the same thing I do, where every time you read a news headline about a massive Cloudflare regional outage, I'm like, yeah, they were vibe coding.
Rex Kirshner (29:57.3)
Yeah.
Dan (29:58.242)
Yeah, I think so, yeah. Yeah, and there's redundancies to my particular work. Like nothing is flight tools, like it doesn't control the aircraft, it controls the cameras, and we can always take it over manually. And nothing would ever be unsafe in my mind. But I also look at, so there's this big layoff that just happened at a company called ForeFlight. ForeFlight makes, basically,
an app that every pilot has started using. That's our planning, our time tracking, where the cheap fuel is; it does everything for a pilot in the air, including letting us know where other traffic is around us. And a private equity group just bought it. It was owned, so they started it, sold it to Boeing, and then Boeing just sold it to this PE group. And they immediately laid off 50% of the staff. And these are all pilots that write
code. And so now everybody's sort of looking at this ForeFlight app that we depend on, and we're like, shit, I don't want vibe-coded releases telling me where to go. Like this is a huge problem. So man, it's worrisome. So when I do see an article like that, or I read news that ForeFlight just had half the staff get canned, I'm like, no. So what did Claude do? You know, these PE guys came in and thought, we'll just have
Rex Kirshner (31:21.738)
Yeah.
Dan (31:24.44)
We'll just have Claude write it, replace all the developers that are pilots too. So...
Rex Kirshner (31:29.279)
Yeah, I... because it's so easy for me to be like, thank you, capitalism. Like this is exactly... this is just the next layer of the enshittification of everything, where money comes in, like, we can automate this, we can use a new technology, let's gut it, and everything's just worse. And I do think that's going to happen, but
I don't know man, poll like 99% of devs and they're putting out shittier code than Claude Code is.
Dan (32:00.556)
Yeah. No. I mean, running a shop, a dev shop, for 13 years, what I can do before breakfast some mornings is one senior developer, two mid developers, and one junior developer. So four people, plus a PM, plus a designer. And that's just before noon. So it has replaced that. Now, what is the quality of the code? I really need to
submit to more standards, and like, okay, is this the best possible way this is written? And I do think I need to find a better way; just linting or reviewing the code myself isn't enough. I need to go out and grab more code audits, like, was this written well? But are the code audits now going to be run by LLMs? And which one was it? Because I want that one.
Rex Kirshner (32:52.491)
Yeah. Yeah. I hear you, but, you know, I think time will tell. And then also, this conversation six months ago was different, right? Like you couldn't objectively say that Claude was putting out better code than most developers six months ago. It was putting out code, right? And sometimes it was just real boneheaded. And sometimes it was like, okay, this could pass. But now...
Yeah, it still does boneheaded stuff sometimes, but, anyway, it's changed so fast.
Dan (33:24.302)
Yeah, Opus was a sea change. It really started getting... like when I was just sort of tinkering a little bit, I was like, yeah, pass. I'll just go back to writing. I'll just do it by hand. But when Opus came out, and then a few months after that, it was like a sea change, like there's a high watermark and it continues to go up. So I'm really curious about what the other LLMs,
Rex Kirshner (33:47.799)
Yeah.
Dan (33:53.142)
DeepSeek and a few of the others that you can run locally, can do. I'm curious what you guys think of that, and what the larger group thinks of that. I was watching, who's the guy that built some Mac Studios? His name will come to me in a minute, but he's running local LLMs to develop his own code and it's writing pretty quickly. And it's not quite Opus 4.5 level yet.
But I'm very curious what you guys think of running things locally, like in your own home lab. Would you ever leave Claude and run an LLM like DeepSeek or Mistral or Qwen or whatever?
Rex Kirshner (34:38.029)
Gerrit, do you have any thoughts?
Gerrit Hall (34:38.974)
Yeah, I've actually been talking about buying a beefy box locally for the purpose of having some local LLMs. I don't know if it's so much to write code. I think as far as code writing goes, I'm pretty happy with Codex and Claude and maybe Gemini, et cetera. But there's some large data processing tasks, which I feel like if I had a local machine, I could train that and run it locally instead of spending a ton of credits.
running the same model tens of thousands of times against an existing, so OpenAI or something like that, API. I'm a big user of DeepSeek, I have to say, for production applications. Every time I've kind of tested, if you have to use, in a production environment, calls against an AI API, I kind of extensively try and split test, because, first of all,
I have Claude Code, so it's easy to build my own in-house A/B testing module and a nice dashboard to show me the results. But also just because those costs are somewhere on the order of a few cents per call, I'm happy to deploy it and eat the cost and see how it's doing and see if I get any bang for my buck in terms of using the more expensive models. But in 90% of cases, I'd say, where I thought I might need to pay the full price for a top-tier model,
Gerrit Hall (36:05.397)
DeepSeek comes in 90, 95, 99% cheaper and does as well for most kind of common applications. I think writing code in a sophisticated code base is one of those top-tier applications where you need a coding-type reasoning model. But if you're talking about helping a user correct some input on a form on a website, for example, it doesn't make sense to spend 10 cents in most cases, but it's cheap enough to be reasonable to do it using DeepSeek.
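A rough sketch of the in-house split-testing Gerrit describes: route most production calls to the cheaper model, a slice to the premium one, and log both for comparison. It assumes both providers expose OpenAI-compatible chat endpoints (DeepSeek and OpenAI do); the model names, traffic split, and log format are illustrative.

```python
"""Split-test a cheap model against a premium one on live traffic and log the outcomes.

Sketch only: keys, model names, and the 90/10 split are placeholders.
"""
import json
import random
import time

from openai import OpenAI

ARMS = {
    "cheap": (OpenAI(base_url="https://api.deepseek.com", api_key="DEEPSEEK_KEY"), "deepseek-chat"),
    "premium": (OpenAI(api_key="OPENAI_KEY"), "gpt-4o"),
}


def call_with_split(messages: list[dict], cheap_share: float = 0.9) -> str:
    """Pick an arm at random, call it, and append the result to a JSONL log for the dashboard."""
    arm = "cheap" if random.random() < cheap_share else "premium"
    client, model = ARMS[arm]
    start = time.time()
    resp = client.chat.completions.create(model=model, messages=messages)
    with open("ab_log.jsonl", "a") as log:
        log.write(json.dumps({
            "arm": arm,
            "latency_s": round(time.time() - start, 2),
            "total_tokens": resp.usage.total_tokens,
        }) + "\n")
    return resp.choices[0].message.content
```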
Rex Kirshner (36:35.469)
Hmm.
Dan (36:35.726)
Yeah, I'm almost there on the home lab. Almost there. So Qwen3 just dropped. It's their bigger model. Apparently people are like, eh, it's 95, 96% of what Opus 4.5 is. And there's specifically a coding platform for it, to run it on a decent machine. It's dual 5090s. You're spending $10,000 or more, $20,000 if you want to build it.
Dan (37:05.678)
They aren't iterating as fast as Anthropic is. So these models get released, at least Qwen, which is a team out of Alibaba, the Alibaba guys. So it's closed, right? So you can put it on a closed-loop system or an air-gapped machine and just run it. But they're only releasing, you know, two, three, four, they're not iterating very quickly, where I think Anthropic is, you know, iterating very quickly.
So I'm curious, I haven't thrown DeepSeek on anything yet. It seems like the cheapest one to run. You don't need a big machine to run it locally. So I'm very curious to see where that goes. Cause having it just run locally and be able to do everything locally, that's a big plus for me. Plus, I don't know how much of my...
my questions and answers and code are actually going up to Anthropic. Like, yes, I clicked private, but is it really private?
Dan (38:10.722)
We lost Gerrit.
Rex Kirshner (38:11.821)
Sorry, just production. Dan, can you hear me? Well, you're cutting in and out a little bit.
Dan (38:17.068)
Yeah, how's that better? Better?
Gerrit Hall (38:20.789)
I thought it was me. I just went to turn off my node because I thought my internet was going out. not sure.
Rex Kirshner (38:25.129)
No, it's Dan. All I can see is a pixelated head.
Rex Kirshner (38:34.678)
Yeah.
Gerrit Hall (38:35.925)
All right, well.
Rex Kirshner (38:40.351)
yeah, I think it's better. Say one more thing.
Dan (38:44.75)
You got me back.
Rex Kirshner (38:46.433)
No. That's better.
yeah, we'll give him a second to come back. I mean, the question I'm going to ask is like, what? I don't really.
Like I think it's cool to run things locally, but like why?
Gerrit Hall (39:14.421)
So I think like the case I might give is like say you had a data set of like a hundred thousand things that needed tagging
Rex Kirshner (39:23.564)
Yeah.
Gerrit Hall (39:24.725)
If you were to try and run that against, let's say... Say it's kind of important, but, you know, you want this list. There's no monetary value to it, right? It'd be helpful for you for whatever reason. If you were trying to run that against, like, ChatGPT, it would cost, let's say, five cents per call times a hundred thousand items. Like, that's kind of a lot of money, right? Like...
Rex Kirshner (39:48.298)
Yeah.
Gerrit Hall (39:49.832)
If you have a machine at home, it's just kind of sitting idle, and you can train it quickly and say, here's the rules, here's how you want to tag it, go through entry by entry and tag this properly, it'll be free. So you just saved $5,000.
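Gerrit's arithmetic: 100,000 items at roughly $0.05 per hosted call is about $5,000, while the marginal cost on an idle local box is near zero. A sketch of the local batch loop, assuming a local server that exposes an OpenAI-compatible endpoint (Ollama and vLLM both do); the port, model name, and sample items are illustrative.

```python
"""Tag a large dataset against a local model instead of a hosted API.

Sketch only: placeholder data, local endpoint assumed on localhost.
"""
from openai import OpenAI

items = ["wood pole, cracked crossarm", "steel tower, vegetation within 3m"]  # placeholder entries

# Hosted estimate: 100,000 calls at ~$0.05 each.
print(f"Hosted estimate for 100,000 items: ${100_000 * 0.05:,.0f}")

local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
tags = []
for item in items:
    resp = local.chat.completions.create(
        model="qwen2.5:32b",  # whatever model the local box happens to be serving
        messages=[
            {"role": "system", "content": "Tag each entry according to the rules you were given."},
            {"role": "user", "content": item},
        ],
    )
    tags.append(resp.choices[0].message.content)
```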
Rex Kirshner (40:03.404)
Yeah, I mean, a little bit. Yeah, a little bit. We might have to redo this one. Dang, it's here. I really, I get it. Like that, of course, logically holds for me. To me, that a little bit sounds like the arguments we have always seen about technology, like Google Maps, right? Like, yeah, it's amazing. It's the best. But like...
Gerrit Hall (40:10.858)
Mm-hmm.
Rex Kirshner (40:31.839)
I can still use MapQuest and like if I use MapQuest, I'm not giving up all of my data to Google and you know, and like, yeah, sure. Like those, those concerns are valid, right? But at the end of the day, like the trade-off you get for using like the most cutting edge technology is like you get better experiences, you know, like you get better results. It's easier. It's like polished UX, all these things. And so.
You know, I think what Dan does is a little different. Like he's running stuff locally while he's flying helicopters. And so, I don't know, if he's talking about buying a $20,000 machine and putting it in the helicopter and then running a local LLM, essentially what he's saying is, what if I just replace my homegrown image recognition code with an LLM on board? Okay. Like that's cool.
Gerrit Hall (41:00.083)
the Google.
Rex Kirshner (41:25.293)
But I don't know, I just... I see people talk about, like, I wanna have my own model, and I get it.
Gerrit Hall (41:31.571)
Yeah, the helicopter case, the helicopter case, I could also see the argument where like Wi-Fi connectivity could be killer, right? Like if you fly through an area that's a dead zone or like for whatever reason the Wi-Fi just blips out. I mean, he said it's not mission critical, of course, like no one's gonna die if it blips out. But if you do want or need 24 seven connectivity, it probably has to be a local device.
Rex Kirshner (41:39.275)
Yeah.
Rex Kirshner (41:56.223)
Yeah, well, plus the combination of Star... I know he's on Starlink too, which is, I mean, dude, talk about being in the future.
Gerrit Hall (42:02.837)
I mean, that's an argument against buying Starlink, because I've been debating it, because my internet's been so bad lately. But his just blipped out.
Rex Kirshner (42:10.701)
Yeah,
Gerrit Hall (42:15.349)
Well, where I live, they charge a $1,000 surcharge just to buy Starlink because of demand. I'm like, I don't know if I want it. I just want to try it and see if it's worthwhile. I don't want to spend that. Elon does not need another $1,000.
Rex Kirshner (42:30.155)
Yeah.
Dan (42:31.557)
Am I, did I come back? Am I here? Okay. I just heard you mentioned Starlink. Yeah. No, I'm on Starlink and it does great 90 % of the time, but 10 % is what just happened. Yeah. Which is also, let me, so we in the helicopter use specific comms, including Starlink.
Rex Kirshner (42:33.685)
Yeah, you're back.
Gerrit Hall (42:34.089)
You're here. You're on Starlink right now. Yeah, I've been debating buying it.
Rex Kirshner (42:49.495)
Yeah.
Gerrit Hall (42:50.003)
Okay. Okay.
Dan (43:01.049)
Roam, and it was so inconsistent. I would run two or three Starlinks in the ship with us, and then we had to build all new comms that were like cell-based technology, just because Starlink was so hit and miss and you couldn't depend on it. So good old 5G works quite a bit better. But anyway, I apologize for cutting out there. Yeah.
Rex Kirshner (43:24.299)
No, dude, don't worry about it. So the conversation we were having while you were gone was just, I understand, for, like, cool reasons and some vague privacy reasons (which, I'm just like, well, we've already sold ourselves to all of these companies anyway, I'm not that worried about privacy), reasons why you'd want to run a local LLM. And then I think you have the other unique case, which is you're doing stuff
like up in the air, in places where it's hard to get connectivity. So I can kind of see that as a separate standalone case of why it would make sense to have a local LLM up in the air with you. But what I was asking Gerrit and what I want to ask you is, why, why are you interested in running something locally? Like to me, it seems like a lot of lift for, like,
I don't know man, like Claude not only does everything that I could want of it, but it's always going to be like the newest, like best, most capable.
Dan (44:27.449)
Yeah. Go ahead, Gerrit.
Gerrit Hall (44:32.889)
I mean we talked a little bit about it while you were away and I tried to steel man some of the case in terms of like what you're doing which is that your internet just went out. If you're on a helicopter I could imagine you go through lots of situations where you might be unable to ping the mother server just because of Wi-Fi outage or some reason. And I could see that being an absolute case where you'd want to have a local LLM that you could rely on.
Dan (45:00.325)
Yep. Yeah. And that's part of it. As for the home lab reason: so yes, obviously in the helicopter, we run our own machine learning models. They're pretty dead simple though. It sees something, it recognizes it, it inventories it, checks for problems, and then builds a report. So it's a much simpler set of rules that we abide by when we're in the air. There are some things that I'm thinking about that'll make it learn even a little faster up in the air when it gets to situations it's not used to;
it'll sort of open its eyes and say, okay, maybe I can look at this in a different way. So I think I'll start to run some experiments in the air soon enough. But here at home, I don't know what I spend on the AI agents or the Claude Code or whatever else I'm using, $400. So it's very, it's tiny compared to my payroll at the old company; it's very different from a $200,000 payroll.
Rex Kirshner (45:54.613)
Yeah.
Dan (45:58.449)
So, and it does replace quite a bit of that. It doesn't get me all the way, but also, the sensitivity of some of the things that I work on... you know, I do want to be able to just pound away and have longer context windows. So, Qwen or Mistral or DeepSeek, depending on how you build it. And I'm not up on quantization yet. Like I don't quite understand how to make it fit all on
one card. I do understand a little bit about it, and you're probably not running at full horsepower when you build something locally, but you are running much quicker, and the context window is larger, and I can just sort of hammer more, I think. But also I'm sort of a DIY guy. I just want to get in early and see what it's like to build one while it's
expensive, it's not overly expensive. And I think 10 or 15 or $20,000 gets you a hell of a model to run. And then just sort of updating it when they drop new ones and it gets better and it gets quicker, it'll be easier. So I like to early adopt and then sort of fail fast.
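On the quantization question Dan raises, the back-of-the-envelope math is just parameters times bytes per weight: a 32B-parameter model needs roughly 64 GB for 16-bit weights but only about 16 GB at 4-bit, which is why quantization is what makes it fit on a single 32 GB card. A tiny sketch of that arithmetic (it ignores KV cache and activation overhead, which add a real amount on top):

```python
"""Rough VRAM math for quantized weights: parameters x bytes per weight."""

def weight_gb(params_billion: float, bits: int) -> float:
    # bytes for the weights alone, expressed in GB
    return params_billion * 1e9 * bits / 8 / 1e9


for bits in (16, 8, 4):
    print(f"32B model at {bits}-bit: ~{weight_gb(32, bits):.0f} GB of weights")
# 16-bit: ~64 GB (won't fit a 32 GB 5090); 4-bit: ~16 GB (fits on one card)
```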
Rex Kirshner (47:01.303)
We are.
Rex Kirshner (47:18.251)
Yeah, you know, I think what you're talking about is the, not the logical conclusion, but just kind of the maximalist version of building, you know, your own tool. So...
When I kind of set up today's conversation, I was thinking about the internal tools that we're building for ourselves. You know, what I'm realizing through this conversation is maybe I'm doing something that's, whether or not it's helpful, a little different. But of, let's say, my nine projects that are running, like two of them for sure are just developer tools that I'm creating for myself. So one was this AI context system, which was, how do I take as much of the context as possible
and put it into markdown files so that it can be reviewed by other AI agents, or it's available between sessions, or any of that kind of stuff. And then I have this centralized admin tool that has everything from an activity monitor to a project monitor; it helps me upgrade my context system.
I put a huge amount of time and energy and tokens into developing these tools that, you know... I just had a moment this weekend where I asked it, I was like, is any of this doing anything at all? Like I know we spent four months iterating on this AI context system. Every single time you tell me this is super smart and super helpful, but is it? And it's like, these are good questions. The honest truth is, like,
And that is kind of the theme that I wanted to talk through. I think, Dan, what you're talking about is the most is like...
Rex Kirshner (49:13.09)
What these tools really empower us to do is kind of chase our wildest dreams of being these tinkerers. And it's really, really cool. It makes us feel like we're doing something, but I'm not really sure it's moving the ball forward. And Dan, you made one comment earlier that's sticking in my brain, which is, you find yourself more and more these days actually going back to doing the work. You said this in regards to looking at your context, like, markdown files
between compacts. But, you know, I think what these things allow us to do is focus less on the work and more on just, like, cool things that feel good. And that worries me a little bit, for just what it's training in us. Does that resonate at all?
Dan (50:02.033)
Yeah, that does. It occurred to me while you were describing that: oh, what am I really trying to do? If I run it locally, what I'm really trying to do is save time and have a complete contextual view of the thing that I'm building, with longer sessions and larger context windows. It feels like if I
build it locally and I run LLMs here, an LLM coder specific to coding, I can basically run programs around that as well. So my library of MD files that I share with Claude or whatever, every morning or at the beginning of every session or in every tab, well, now all of a sudden that's a machine that's only related to, it's only focused on, building that. So it can have sort of an open understanding of the longer term goal
Rex Kirshner (50:59.725)
Mm-hmm.
Dan (50:59.917)
along with what I'm building every five hour session. So it just starts to, so instead of tinkering or having these ideas like you're describing, it's just one long context window. It just sort of opens it all up and I can leave it running the whole time in the data closet and they can just, I'm looking over at it now, like all the switches and all the machines in there, it can just run forever.
Rex Kirshner (51:04.225)
Yeah.
Rex Kirshner (51:15.34)
Mm-hmm.
Dan (51:29.329)
I think the earn-back, the ROI, is not there. Like whatever I'm spending a month with these tools, it's minimal compared to probably getting this local LLM up. You know, it's way cheaper and way better, to your point, to use something that's iterated on so quickly by Anthropic.
Rex Kirshner (51:29.399)
Mm-hmm.
Rex Kirshner (51:43.158)
Mm-hmm.
Rex Kirshner (51:53.163)
Yeah, Gerrit, I see you're unmuted.
Gerrit Hall (51:55.926)
Yeah, so I think the answer I'd give to you is that the nice thing about this is that, you know, we're in a capitalist system, so you can actually just put the scorecard directly up. Like, it's not uncommon to make a $5,000 investment in expectation of profits of more than $5,000 in the future, right? Like, do you think that any of the projects you've been investing in are likely to make you $2,000 a year? If so, then I'd say that you
have an easy argument that your Claude Code has paid for itself.
Rex Kirshner (52:27.201)
Yeah, no, no, for sure. I just don't know if I have an easy argument that my AI context system is paying for itself, right? And like...
Of course you can take the amount of hours that I put into it and the $200 a month and figure out the actual cost. But there's other costs, right? Which is the time and energy I'm putting in. And then on top of that, just the amount I'm weighing down every single one of my projects with what I suspect is just completely useless
documentation and developer tooling that really only distracts Claude from coding. You know, I worry that Claude empowers me so much to build whatever I want that I'm building things that are making Claude worse.
Dan (53:17.443)
It's an interesting way to look at it. No, I look at it as every day, even if it makes a bunch of wrong decisions or I drive it to make wrong decisions or I take too much time, I just learn something, you know? So I always sort of keep in mind like, I'm learning that, that's the wrong way to do it. And I'm wrong more than I'm right. On some of the decision trees, I don't go down with Claude, but I'm learning and it's certainly a skill like,
Rex Kirshner (53:31.393)
Mm-hmm.
Dan (53:47.343)
my kids need to have this skill. They're 11 and 14. And my son recently, Claude has insight into his homework and how he works. Like it has insight to his G drive, right? And he's struggling with this one concept in math. And I instructed Claude to be very empathetic and very compassionate and give him tips. after watching them, when you give a toddler an iPad, we didn't, but.
Rex Kirshner (53:57.985)
Mm-hmm.
Dan (54:14.329)
When you do, they sort of get it right away. The UXs, they understand how to do it. The same thing happened with Claude and he's used it before, but this last time he had a problem. He was upset. He was angry. He was frustrated. And by prompting Claude to be an empathetic teacher and then reviewing his homework and understanding the missing part, he just learned so, so much. Yeah.
Rex Kirshner (54:39.042)
Yeah.
Gerrit Hall (54:41.654)
So I guess I would say that we don't know exactly what kind of... what passing the singularity is going to do for the global economy. Like there's all sorts of predictions, ranging from things are gonna be great to things are gonna be terrible. I'd say that looking at history, the one thing that we probably know for sure is the rich are gonna get richer and the poor are gonna get poorer. Right, is that a safe thing to say? And I would also go so far as to just venture a guess that if you're not
Rex Kirshner (54:59.693)
Yeah.
Dan (55:02.9)
yeah.
Gerrit Hall (55:08.659)
willing to shell out $200 a month right now to play with Claude Code and understand it, you're extremely more likely to be on the poor end of that. And I'm not guaranteeing that everyone who's shelling it out is going to make it to Valhalla, but I strongly suspect that the people who are strongly engaging with the front-of-the-spear technologies like Claude Code, really understanding them, are incredibly more likely to be on the right side of that dividing line when the singularity separates everybody.
Rex Kirshner (55:35.851)
Yeah, I mean, I definitely think like we're watching like capitalism fail in that like there's too much like the inequality is getting too bad and like it's so clear if you're using these tools like that is only going to accelerate. But I don't know. I don't know how to like be thinking that big right now. I guess you guys are right. Right. Like at the end of the day, these are all learning experiences. And I think what I learned is like any time
that I'm building something for Claude Code to do Claude Code better is probably a waste of my time. But what that did lead to is... I realized I've built too many apps where I don't know if something happens. And I'm like, okay, do I wanna integrate some stupid email service so that I have 25 different apps that are all sending me emails? And...
You know, then the realization is, what if I created a new project that was a notification hub that all of my projects could send notifications to? I could have that, like, create an iOS app, and get push notifications as well. And so, you know, I think my big learning for developing internal tools is, build tools that wire your projects together and make...
You know, I actually don't even know how to articulate it, but I think Claude Code is so amazing that there's a tendency, at least for me, to be like, how do I make this even better? And that is dangerous. Like, don't go down that route. Cause it'll always tell you how to make it better, because, as Dan says, it's trained to make you happy. Yeah.
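The notification hub Rex describes can start as a very small service: every project POSTs an event to one endpoint, and a phone app or dashboard reads them back. A minimal sketch using Flask; the route names, payload fields, and in-memory storage are illustrative assumptions, not his actual implementation, and a real hub would persist events and forward them as push notifications.

```python
"""A tiny notification hub: projects POST events here, a dashboard or phone polls them back."""
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
EVENTS = []  # in-memory store; swap for a database once it matters


@app.post("/notify")
def notify():
    payload = request.get_json(force=True)
    EVENTS.append({
        "project": payload.get("project", "unknown"),
        "message": payload.get("message", ""),
        "received_at": datetime.now(timezone.utc).isoformat(),
    })
    return jsonify({"ok": True, "count": len(EVENTS)})


@app.get("/events")
def events():
    return jsonify(EVENTS[-50:])  # latest events for an iOS client or dashboard to poll


if __name__ == "__main__":
    app.run(port=8787)
```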
Dan (57:18.609)
Yeah, the book is Co-Intelligence by Ethan Mollick, Living and Working with AI. It came out a year or two ago, and he's the one that wrote, today is the worst day that our AI will ever be; it'll get better tomorrow. So he wrote that, and a lot of people are quoting it. A fantastic book, like just extraordinary, because you begin to understand, like, the very first AI machines, and they were weird, man. The dudes that built that, they were building, like,
Rex Kirshner (57:31.019)
Yeah.
Dan (57:47.867)
something to mimic a 15 year old girl. You're like, why did you build that? That's super creepy. But he goes through like, here's the beginning, here's what it does now, and here's what it might do.
Rex Kirshner (57:53.101)
We'll get.
Rex Kirshner (57:59.703)
Yeah.
Gerrit Hall (58:00.051)
Yeah, Rex, I guess if I were in your situation, I would probably approach it from the point of view of, like, if you're a ruthless business owner, and say, what is your KPI? Like, what moves the needle? What gets you to $2,000 a year? I guess it would probably be something like sponsorship offers on your podcast, right?
Rex Kirshner (58:17.835)
Sure, mean, sure, yeah.
Gerrit Hall (58:20.63)
And then I would say, I kind of put everything through that filter of like, is whatever I'm typing into Claude to build right now actually going to get me more sponsorship offers on my podcast? Or whatever it is, right? Like if you have like a, you know, I know you have like some websites that you're doing to like, for donations for your nonprofit. Like is it, is what you're doing directly in that critical path of like getting more donations for the nonprofit or just whatever that like.
single task you think is. I think that would be the filtering device I would use to prioritize if something's worth my efforts or not.
Rex Kirshner (58:53.421)
Yeah, well, you know, it's interesting that you say that, because what I'm realizing is that's totally not how I look at things, at least at this stage, right? I'm still in the, like, oh my God, I'm 12 and just opened up my Christmas present, it's a Nintendo 64, and this is just cool, you know. And I'm so much more in the vein of, what can I build? Like, how can I change just how I live? And like,
Gerrit Hall (59:09.109)
That's true, it is like that.
Rex Kirshner (59:20.045)
like how I operate and interact with my computers and what my computer is capable of, as opposed to settling into that, like, okay, now what can I, how can I build companies or revenue around it? And Gerrit, I very much respect and want to get to where you are. But I do think that there's something kind of, like, this is what Dan's talking about. Like he just...
You want the $20,000 box because it's cool, you know, and you can see it in his closet and like, I feel like I'm not ready to throw that impulse away yet.
Gerrit Hall (59:57.27)
No, I completely get it. Over Christmas especially, I think we talked about how I had just subscribed to Claude Max, and then the week I subscribed, they also gave me double the credits. So in addition to not being able to max it out and feeling like this kid on Christmas, I also was like, what else could I do? So I was going absolutely crazy. I was like, which of the seven Millennium problems do you think you can make the most headway on in mathematics? OK, go ahead and solve the Birch and Swinnerton-Dyer conjecture.
Rex Kirshner (01:00:07.287)
Yeah, yeah.
Dan (01:00:07.982)
Yeah.
Rex Kirshner (01:00:21.549)
Yeah. Yeah. Dude, I find myself Googling, like, most interesting data sets available, just to see if I could do something with that.
Dan (01:00:26.513)
Yeah. Yeah.
Gerrit Hall (01:00:27.189)
Obviously
Gerrit Hall (01:00:33.973)
So.
Dan (01:00:35.011)
Yeah, I think about, what if it's gone? What if it disappears tomorrow, and I don't have Claude or whatever to work on anymore? Like, man, I do not want to go back to that. It is making my life so much easier and I'm getting so much more done. And the entire goal of my company is fewer fires because of power lines. You know, we're keeping pilots safer. Like it's not altruistic, but I really want to do
Rex Kirshner (01:00:51.874)
Yeah.
Dan (01:01:04.421)
good in the world and it's enabling me to do something that I could have never done before in such a quick manner. Like, I need this thing built or I need this thing designed for the helicopter or this part built. You know, think about it with Claude and hey, what have you done? And then go into CAD and start building it or go into code and start writing it. And it's enabling me to do things I never thought possible.
Rex Kirshner (01:01:30.636)
Yeah.
No, I mean, I think that's a good place to close this off. But, you know, that was very optimistic, and I'll put a little bit of a different spin on that, which is, I think about this all the time. Like, what happens if these were gone tomorrow? Because whether we're talking about this AI bubble that I don't really know how to gauge, right, or just the fact that they're so new, these could easily be gone tomorrow. And I feel like my world would be less bright and less exciting, and
That scares me. That is an addict's behavior. So, I don't know. We're not even starting the slide towards rock bottom yet, so we'll end with optimism. All right, thank you guys and see you next week. All right, don't go.
Dan (01:02:06.221)
yeah.
Dan (01:02:19.813)
Good place to end. Yeah.
See ya.
Gerrit Hall (01:02:24.704)
Thank you.