The Bike Shed

331: Git Down

Steph celebrates Utah's adoption day, notes the arrival of Daylight Saving Time, and troubleshoots a CI build time that had suddenly spiked for a client project using TeamCity. She also shares a minor update on the work thoughtbot is doing to scale horizontally and add more machines quickly and efficiently to process more RSpec tests.

Chris was alarmed by logs and unknown-unknowns and had some fun using Git down. Git bless his heart!

This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.

Become a Sponsor of The Bike Shed!

Transcript:

CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.

STEPH: And I'm Steph Viccari.

CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?

STEPH: Hey, Chris. Today is Utah's adoption day. So officially, one year ago, we adopted Utah. He's about a year and a half old now because we got him when he was around the six-month mark. So Utah, aka Raptor, which is the nickname that you gave him, and aka UD the Cutie, which is his other nickname...which reminds me, I've forgotten: why do you call him Raptor? Why is that a name?

CHRIS: Because there's a Utahraptor.

STEPH: A person? [laughs]

CHRIS: No, I think it was, like, the fossils were found in Utah. But the Utahraptor is a type of dinosaur. And so when I heard Utah, my brain went to Raptor, and then I dropped the Utah, sort of a Cockney rhyming slang sort of thing. Shout out to Matt Sumner real quick. But yeah, Raptor.

STEPH: Cool. Cool. Cool. I'm so glad I asked. Now I know. I just accepted it when you called him Raptor. I was like, sure, he can be a Raptor. [laughs]

CHRIS: I feel like that says a lot about me that you were just like, okay, why not?

STEPH: [laughs]

CHRIS: That's different and has no apparent connection to the actual name of the creature, but that's fine. I might be a nonsense person.

STEPH: Or me for accepting it. You share a lot of nonsense, and I accept a lot of nonsense. That might be our dynamic. [laughs] So it works out.

CHRIS: That just may be our dynamic.

STEPH: That's why I'm always so nice with the good idea, bad idea, or even terrible idea. [laughs]

CHRIS: You're like, it's all nonsense 100% of the time, but yeah. So Utah is one year into living with you folks. So that's lovely.

STEPH: Yeah, and he's growing up so well. Oh, and I've been training him for one of his latest tricks. I'm very excited because it seems to be really sinking in. So every night, before we go to bed, we take him out for his final potty break. And one night, for some reason, I started singing The Final Countdown. [singing] It's the final countdown. But I started singing "it's the final potty" instead.

So now, when it's time to go out for the bathroom late at night, I look at him and I start singing [vocalization], and it's working. He's starting to recognize that when I start singing that tune, he's like, okay, and he gets up from his comfy spot, and we go outside. And it brings me a lot of joy.

CHRIS: That is perhaps the best use of Pavlovian conditioning that I've ever heard of. Also, I really appreciate that you both mentioned The Final Countdown and then, just in case anyone is unfamiliar with the tune, hummed a few bars. Thank you for doing the service there.

STEPH: I have been singing so much this week. I don't know if Joël Quenneville, who I've been pairing with a lot, appreciates that. Sorry, Joël. But I have been singing so much. And I think that's post-vacation vibes. That's what vacation does for you. And it helps you get back into, you know, lots of singing or at least it does for me.

Let's see, what else is going on this week? So this is the week that we have DST in the USA, so Daylight Saving Time, aka summer time, where we advance our clocks so everybody...although this is going to be late. So at this point, by the time people are hearing this, you're going to have already dealt with all those bugs that have crept up. But those are creeping up this week, where people are starting to notice a lot of those flaky specs that aren't technically flaky. They're actually breaking for real reasons because they were written in a way that doesn't consider that daylight saving boundary.
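(For illustration: the usual fix for these specs is pinning the clock so the boundary is explicit instead of depending on when the suite runs. A hedged sketch using ActiveSupport's time helpers; the Reminder model and the dates are hypothetical, and it assumes an app time zone that observes US DST:)

```ruby
require "rails_helper"

RSpec.describe Reminder, type: :model do # hypothetical model
  include ActiveSupport::Testing::TimeHelpers

  it "schedules the follow-up a day later, even across spring-forward" do
    # Pin the clock just before the US 2022 spring-forward (March 13, 2:00 a.m.).
    travel_to Time.zone.local(2022, 3, 13, 1, 30) do
      reminder = Reminder.new(send_at: 1.day.from_now)
      # Without travel_to, a spec like this only fails on the days around the
      # clock change, which is why it looks flaky rather than broken.
      expect(reminder.send_at.to_date).to eq(Date.new(2022, 3, 14))
    end
  end
end
```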

CHRIS: It's as if you spend some of your time fixing flaky specs, and that's where your mind goes with DST. Because, to be honest, part of what you're doing right now is telling me that this is coming up, and I didn't know. I had forgotten about that, which is very exciting, except you lose an hour of sleep for this one, right? Or is it that you gain one?

STEPH: We're going forward. Yeah, it's fall back and then spring forward. That's how I remember it.

CHRIS: Worth it. I'll take the sunshine at night.

STEPH: Yeah, it's supposed to be so we have more sunshine during the evening hours. That's the reasoning for the nonsense, the headaches. On some more technical news, when I came back from vacation, we noticed that the CI build time had suddenly spiked for the client project, where previously we were averaging, I'd say, around 25-26 minutes. There's definitely a range there, but that seems to be pretty consistent.

And right now, builds are taking more like 35, sometimes upwards of 45 minutes. And so it's been a bit of a whodunit, or what-caused-it, adventure of figuring out why, what's causing the spike. And so Joël and I have been pairing heavily on that to investigate what's going on, learning a lot of features that TeamCity offers and just diving into this particular issue.

One thing that brought me joy while looking through all the builds taking place on TeamCity: I noticed there are a number of builds using the RSpec selective testing that I added, where if you only change a test, then we're only going to run those tests instead of the whole suite. And it was one of those changes where I thought, okay, maybe someone's going to get use out of this. Joël and I will probably get use out of this. But I'm actually seeing it on about one in every ten builds, something like that. And I'm just like, oh, this is awesome.
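(For anyone curious, the core of a selective-testing scheme like the one Steph describes might look something like this hypothetical git-diff-based sketch; the project's actual TeamCity implementation likely differs:)

```sh
#!/bin/sh
# Hypothetical sketch: if the branch only touches spec files, run just those;
# otherwise, fall back to the full suite.
changed=$(git diff --name-only origin/main...HEAD)

if echo "$changed" | grep -qv '_spec\.rb$'; then
  bundle exec rspec                            # non-spec files changed: run everything
else
  echo "$changed" | xargs bundle exec rspec    # only specs changed: run just those
fi
```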

One, people are improving tests. That's amazing. And then two, they're benefiting from it, especially while we have this spike going on. So that was a suggestion from you that I appreciate because that is paying dividends. And so that brought me a lot of joy while looking into this other issue, which we haven't resolved yet. We think it has something to do with how the tests are being balanced across all the different parallelized processes. And we think that there is an imbalance that has happened, and that's what's really throwing things off.

So we can see that one particular process is taking around 26-27 minutes, but the next-highest process is only taking 17 minutes. So it's like, why are there suddenly ten more minutes being attributed to one process? And why is that not getting spread out? So still looking into that. That's the mystery for this week. But that's mostly what's going on in my world. What's up in your world?

CHRIS: What is up in my world? I'm going to say a quite alarming thing happened this week. We were investigating some changes, or rather some behavior, in a particular portion of the system, and we ended up in the logs, just sort of combing through. And I happened to notice this one log line that...our logs tend to be somewhat verbose. They're JSON-structured log format. I've talked about the lograge setup that we use in the past, but there's a bunch. These are long lines of JSON-structured data.

But this line that caught my eye was not. It was just some text, and it said, "Unreported event:" and then some other text. And I was like, ah, what? Who didn't report what to whom? I did some digging and eventually figured out that this was Sentry. Sentry was logging that it had not reported an event to us. But had we not randomly happened upon this in the logs, which is sort of a random thing to see, we would have missed this, which is scary. I mean, it was missed for a little while. And so Sentry was not reporting certain events.

We had made a change, particularly to Sentry's before_send configuration. So there's a way that you can do some amount of filtering client-side, the client being, in this case, our Ruby app. So that's the client side of Sentry, and then there's their server backend. Weirdly, that's the way client and server work in this case.

But the idea is you can do some proactive filtering of being like, you know what? Rather than sending a ton of noise...because we know there's this one error that we can't stop for reasons. It's a JavaScript Chrome extension that's getting embedded in the app. That doesn't mean anything; that's just noise. Rather than even sending those over to Sentry, let's proactively filter them out. before_send is a function within the Sentry SDK that allows you to do this.

But it turns out if you raise an error in there, if you happen to have introduced something that doesn't cover all the possible edge cases, then Sentry will just not let you know and will log, interestingly, that it did not report the event. I'm going to throw it out there that I would love it if Sentry were to Sentry me...that's where I put the "something very bad happened, and you should look at it" stuff.

And they're just like, well, something pretty darn bad happened. We'll log it. Supposedly, my understanding is before_send can be used to filter out PII or other things like that. And so their failure mode is intentionally quiet. That's my understanding as to maybe why this is true. I wish there were a configuration option that said, no, please fail as loudly as humanly possible. But that was terrifying.
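(A hedged sketch of the shape of a before_send filter in the Ruby SDK; the noise check and the fail-open rescue are illustrative, not the app's actual code:)

```ruby
# config/initializers/sentry.rb (sketch)
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]

  config.before_send = lambda do |event, hint|
    exception = hint[:exception]

    # Proactively drop the known-noisy error; returning nil skips reporting.
    return nil if exception&.message&.include?("chrome-extension") # hypothetical check

    event
  rescue => e
    # If the filter itself raises, the SDK quietly drops the event (the
    # "Unreported event" log line), so fail open: log loudly and send anyway.
    Rails.logger.error("before_send raised: #{e.class}: #{e.message}")
    event
  end
end
```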

STEPH: Yeah, absolutely. I'm going to piggyback on what you just said for a minute because I was thinking something similar earlier, related to the sudden spike in our CI builds. I suspect there's one particular change that has caused this to happen. I don't know what it is yet, but that's just my suspicion. And it would be great if, when that build ran, let's say the build went from an average of 25 minutes to suddenly taking 35 minutes, TeamCity had alerted us, or something more aggressive had happened to say, "Hey, your team..." or maybe it's just in the logs somewhere.

Okay, not in the logs, somewhere more visible on the build, where it's like, "Hey, your build took an extra 10 minutes compared to the average, just letting you know. I don't have a diagnosis for you, but we're letting you know." So yeah, plus-one to getting those types of alerts out to people and notifying us when an average isn't being met or when things aren't getting logged like you'd expect them to.

CHRIS: As part of what we were doing in the logs...like how to get to that anomaly detection place is a really interesting question in my mind. And this is a case where we were in the logs, and we wanted to instrument more things. So we have a bunch of stuff right now that goes in. It's either a warn or error log level. And the error should be pretty rare because, ideally, those are going to Sentry instead, but we still want to keep an eye on them.

But we introduced a new search within log entries, which is what we're using for log aggregation and searching. And the idea was to group all warn-level messages, grouping on the message string. So ideally, what this allows us to do is say, "Oh, we've seen 200 instances in the past two days of this new warning that we didn't see before." The difficulty is, as a human, I would see "unhandled error blah" as one bucket of warning, or I might want to see it that way. I might want to group on part of the message. So it becomes really hard to find the signal in the noise on these, but at least it was a start.

We now have this little graph for both warning- and error-level log messages, so we can see: are there any new anomalies occurring pretty regularly? But this, again, was just this weird edge case where we were lucky to catch it. But it was very scary that it was just throwing stuff away. So it might have been true that our error log got a little quiet for a while, which was nice, but it wasn't 100%. It wasn't like we were at ten an hour and then we went to zero. It was like we went from some to a lower number because we were still getting some. We were only filtering out certain ones.

But yeah, it's: how do you know at runtime that the system is doing the thing? This is increasingly the question that I have in my mind. But yeah, so that was the thing. We fixed it. It's fixed now. I also set up an alert in log entries to say, "If you ever see this particular phrase again, unhandled or unreported, then please tell me about that post-haste." So we've got that now.

STEPH: That's perfect. That's what I was about to ask us if there's a way that you could add a filter or add a warning for that anomaly detection. So that sounds great.

CHRIS: I've got that now because this became a known-unknown, but there are still the unknown-unknowns, and there are so many of them. And I can't know them; that's my understanding of how they work. I would love to know them. I would love to pin them down and be like, "Hey, what are you doing here?" Someday, maybe. But anyway, that was the thing in my world. [laughs] It was fun. It was a great little time. What else is up in your world?

STEPH: I feel like you can always judge the level of fun based on how high someone's voice goes. No, it was fun. It was great. It was fun.

[laughter]

CHRIS: I believe that is an accurate assessment, yes.

STEPH: I've caught myself doing that. I'm like, my voice is extra high, so I don't think I really mean that when I'm using the word fun. [laughs]

Mid-roll Ad

Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.

Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.

Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.

And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.

I do have a small update that I can share regarding the work that we're doing to be able to scale horizontally. So we want to be able to add more machines quickly and easily so we can then process more RSpec tests. And we're pushing forward on that particular path with TeamCity because they have something called a composite build. A composite build is essentially your parent or your supervisor build, and from there, you can create other subsequent builds.

So we can then say, all right, let's have multiple builds that each run RSpec tests, and then we can separate the suite that way. And right now, we're going about it in a hacky way because we just want a proof of concept. So we are saying, specifically in this particular step, we want you to run spec/models, and in this other process, we want you to run these particular tests, just because we want to see how this works. And so far, the aggregation seems great.
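(Concretely, the hacky split might look like the hypothetical child-build steps below: hand-picked slices of the suite per build, which is what makes it a proof of concept rather than a scaling strategy.)

```sh
# Hypothetical steps for two child builds under the composite build.
# Child build 1:
bundle exec rspec spec/models
# Child build 2:
bundle exec rspec spec/requests spec/jobs
```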

So when you look at that composite parent build, it shows you how each of those builds is doing. It's also reporting back the failures, and it's even de-duping them. Because initially, we set it up where we were running the full test suite in parallel on both of these builds, [laughs] not what we wanted; fixed that. But it did highlight that it was de-duping the test failures. So that part was nice.

So the UI seems great and quite capable of doing this. Composite builds seem to be the way we can do this with TeamCity. But we're still diving into actually getting the metrics: okay, how much is this actually going to speed us up? And what does this look like if we want to scale up from, say, 5 machines to 10? That part doesn't feel graceful, because then you have to go in and change the configuration, copying the configuration to add a new build that will then process RSpec tests.

Other services like Buildkite make it very easy. I can't remember if it's literally a slider or a number that you enter, but you can say, "This is how many processes I want to run," which would be a lot nicer for actual scaling. Versus TeamCity, where it feels far more manual and intentional, where you have to duplicate and add those settings. But it's a really good first step because, as we'd highlighted before, there's a lot of risk in moving from an existing infrastructure to something totally new.

So if we can have some wins with this approach and help out the team and reduce build time, then that gives us more of a grace period. Then we can assess: okay, do we really want to move over to Buildkite? What do we want to do next? What does this look like? And have further discussions. So that's a small update there. Next time, I should have some more updates with actual data on how things are looking.

CHRIS: Oh, cool. Yeah, I appreciate the update and definitely interested to hear how this continues to play out. This is a large project that you're undertaking, with all the facets and whatnot, so yeah, super interested to hear the continued journey of the test build time reduction. Let's see, other news in my world. I've been exploring something that...I'm intrigued by the idea. Let's go with that. [chuckles] That's going to be my start. I always start with these lead-ins that build things up too much.

But I am finding a small tension in trying to just keep up with what the team is doing, which is a wonderful place to be. Our team is growing. We actually have someone new joining tomorrow, very exciting. But I'm trying to find the right version of I don't want to block things. I don't want all code review to have to go through me. But I do want to keep an eye on everything. I want to kind of know what we're doing collectively.

And ideally, mostly, that's me being like, yep, that makes sense. We're doing that. I remember that, cool. Wait, what's this? And rarely, occasionally, there'll be a point where I'm like, oh, I want to intervene here. I want to have a conversation. I want to rethink how we're building this. And so it's moving from a place of any sort of blocking synchronous review, or the necessity for that, to an ad hoc post-review sort of thing. And so the way that I'm trying to poke around with this is, of course, by writing some code to do it, because that's me.

So the two systems that we're using that seem most of interest are GitHub and Trello. And it turns out GitHub has a wonderful search, and I can create a parameterized search, a URL that jumps into a search saying, "Show me everything that was merged in the past X amount of time," so I can say the past two days because I haven't checked it in two days. So I'll see all of the PRs that were merged, and some of them I'll have already reviewed, so maybe I could even filter further there. But for anything that I haven't seen, I'm like, oh, what was this? What was that? What was this other change?
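(As an illustration, a parameterized search like the one Chris describes can be bookmarked as a URL along these lines; org/repo and the date are placeholders:)

```sh
# Hypothetical example: all PRs merged on or after a given date.
open "https://github.com/org/repo/pulls?q=is%3Apr+is%3Amerged+merged%3A%3E%3D2022-03-06"
# The same search, un-encoded: is:pr is:merged merged:>=2022-03-06
```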

Similarly, on Trello, there's a way via the API to get all of the card update actions. And then I can filter down to whenever a card was moved, which in our system means...we're doing Kanban-style, so a card being moved from this column to that column tells me that someone is progressing forward with some work. And then I can filter further down because, again, I don't really want to be blocking on this. I'm most interested in what we have done or completed in the most recent timeframe.
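(The Trello side might look like this hedged sketch; the board ID, key, and token are placeholders, and filter=updateCard:idList is the documented way to fetch only card-moved-between-lists actions:)

```sh
# Hypothetical sketch: list-to-list card moves on a board since a given date.
curl "https://api.trello.com/1/boards/${BOARD_ID}/actions?filter=updateCard:idList&since=2022-03-06&key=${TRELLO_KEY}&token=${TRELLO_TOKEN}"
```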

And thus far, it's an interesting data set. And it's an interesting way to switch the problem around such that I'm not feeling...there was FOMO, or organizational FOMO is perhaps how I would describe it: I want to try and keep an eye on stuff and make sure I'm responsive, but if I'm blocking, I have to step away, and then I'm worried that I'm missing things. And so I'm trying to find that good middle spot. And this feels like an interesting exploration of that.

STEPH: I'm intrigued when you mentioned the card moving over, so then you can tell things are progressing. And then you're answering the question of what did we do in this particular chunk of time? When you move stuff over, is there a clear sweep of we have finished this sprint, and then you have the date of that sprint at the top, and so then you essentially have a column that represents all the work that was done in that sprint? Is that an approach that you're using? Because that's the one that immediately came to mind for me when you're wondering what was accomplished during this week or two-week period?

CHRIS: Interesting question. So we're not really doing sprints; there are no real iterations. We're doing more of the...I think Kanban is the way to describe it. But basically, we have a prioritized next-up column. And then, every day, continuously, the work has the same shape: pick up the next most important thing, work on it, move it through the various columns.

I did introduce in Trello just the idea of, like, here's a month, so we can see by month what we're doing, but that's too low granularity in my mind. I don't want to review it a month at a time. The whole point of this, in my mind, is to see stuff as it's happening, vaguely in real time, but not requiring me to constantly be monitoring everything. So it gives me an opportunity at the end of the day to be like, what happened today? What did we do? But yeah, there's no real sprint that I would couple this to because we're not really doing sprints.

STEPH: Got it. Yeah, that gives me more context. I understand why you're looking for ways to answer that question of, like, what did we accomplish in this week or in a particular time period.

CHRIS: And to name it, this is not an intention on my part to be like, I need to control everything. I need to make all the decisions. I very much want to empower the team. And in my mind, this is actually a mechanism to empower the team. I want to give them more freedom and then have the opportunity occasionally to check back in and be like, oh, actually, there was some context missing here in the way we did this. Let's actually unwind that and do it this other way for these reasons. But it gives me the ability to potentially have that conversation after the fact.

We're trying very hard to have the tickets be as representative, complete, and well documented as possible. But that's very difficult to get to. And there are also things that I don't even know to mention. Again, I think the critical bit is this is not an attempt to make sure everything aligns with what I think; it's more that I want to empower the team to move without me most of the time. And then, where things potentially warrant a small conversation or a redirection, we have the ability to do that. And so I'm trying to build that back into my workflow while basically loosening up my connection to the work in progress at any given point in time.

STEPH: So you just touched on a topic that's really interesting to me, or a particular space. You're doing a very kind thing where you want tickets to have lots of context so that people feel confident when they're picking up the action item to be done. And for someone who's new, that's incredibly helpful, and I think more important since they are new to that world.

But in general, my spicy take of the moment is going to be: as developers, that's part of our job. If we notice that context is missing, or if we're not clear about the action item, it's to think through: what is it that I'm missing? Who do I reach out to? Who can I go to for help? How can I scope this work? All of that, to me, is very much part of our role.

And the idea that tickets always have to be perfectly curated, which I don't think you're saying (you're just trying to be extra helpful), if someone were to have that expectation, I think that expectation is wrong. And I do think it is part of our work to help make sure that tickets are well scoped and well defined, and to have those conversations with the people creating the tickets, or to create them ourselves.

CHRIS: I love the clarification there, and I'm definitely in agreement with you. I don't know how picante of a take it is. I would be intrigued. Listeners, let us know. Are we breaking your mold of what things should be? But I do like the idea that it is a conversation, a back and forth. And so the idea that, as developers, there should just be this very clear list of things to do, and you just kind of pick up a card and, heads down, get it done...I don't think that should be the mold.

But I do think, ideally, the why is the most important thing that should be in a card. So ideally, a card should have little in terms of technical implementation notes and more in terms of: here's the goal that we're going for, here's the problem, or here's the thing that we're trying to solve. And then maybe a suggestion of, I think it could be X, Y, and Z, but I'm not sure. Or, we want to be able to send transactional emails, but I don't know any more than that.

Our goal is to engage users. Like, that last sentence, that last little bit of "our goal is to engage users," is a critical, critical data point, versus "our goal is to solve for a regulatory and compliance issue." It's like, well, those are different. And they will lead to different solutions and different implementations and all that.

So yeah, I definitely share the idea that cards don't need to be perfectly specified. And if anything, I think I'm closer to that than it probably sounded like I was. But for that reason, it's totally possible, in my mind, that work will be done in a way where, after the fact, I'm like, "Oh, sorry, there was a misunderstanding here. Let's revisit this work."

And so, my goal is to try and stay connected and have a feedback mechanism at the end of the process. So when the work is done, I can spot-check it rather than having to watch it as it's happening or proactively define everything in excruciating detail such that exactly the right things happen all the time. So I'm moving to a place of ask forgiveness, not permission. That's the wrong analogy here. But that idea of, like, we can clean it up after the fact, that's fine. And we don't need to try and prevent everything, or at least that's what I'm exploring.

STEPH: Yeah, I love that you highlighted having the why. I adore it when that's on a card, because then I want to know the goal, and that's going to help me ask questions and think about scoping. Versus if it's a very specific implementation, then I feel so narrowly scoped that I don't feel as confident that I know why I'm doing this; I just feel very directed to do a thing. So having the why is incredibly helpful.

I have also felt the pain that you're mentioning, where a ticket seems to have all of the work clearly defined, and the goals, and the whys; it can have everything there, but something gets lost in the communication. And so someone implements it in the way they interpreted the work, which isn't actually what the ticket, or the goal of the work, intended.

So I appreciate that you're looking for ways to tweak things to make sure that whoever picks up that ticket will have the same interpretation that the author intended. And then, if things do get misaligned, you chat and figure out ways to improve it.

I think that's the point that I was really thinking about, and my, air quotes, "hot take" is that as developers, a big part of our job is communication, and then also sharing the knowledge that we have with other people. And so if someone is expecting that they can just always pick up work and never talk to anyone, I don't know, maybe you're in the wrong business. [laughs] That's my hot take.

CHRIS: I, for one, like the hot take. It is nice and ever so slightly spicy.

STEPH: Thanks. Yeah, I just think communication is incredibly important. Earlier, you mentioned, I don't think we were on mic at the moment, something about a new Git alias. And I am very intrigued to hear about what you've added, what it does, all the details.

CHRIS: All the details, that's probably too many, but some of the details I can certainly provide. So I have two new Git aliases; one is Git gone, which is probably the heart of the whole thing. And so the background of this is I found myself pushing the green merge button on GitHub more. We've introduced some branch protection stuff, which I've talked about in previous episodes. And I dream of the day that one of my good, good friends at GitHub will give me access to the merge queue beta. Please, please, I implore thee. But in the interim, still clicking the green merge button more often than not.

STEPH: Wait. I have to ask, to help you in this dream: are you forwarding these episodes to someone? You could just take a clip of you saying, "Please, please, please give me access," [chuckles] and forward that, or mention someone at GitHub, or GitHub in general.

CHRIS: Just leaving voicemails for people with a Bike Shed section of me begging for access to the merge queue beta?

STEPH: Yeah. [laughs]

CHRIS: No, I'm not. But maybe I need to up my game. You're right. [laughs] Someday, I'll get there. And that will only exacerbate this issue that I'm feeling, which is, again, I'm clicking the merge button. That's what's happening. And as a result, that means my local branch has now, like, done its job. You've served me well. And in the Marie Kondo sense, I need to hold you up, thank you for your service, and then let you go. But I obviously wanted to automate that.

So Git gone does that automation, and it was fun. I found a blog post, which we'll include in the show notes, that had most of the pieces here, but it was still fun to play with a shell pipeline in a way that I hadn't in a while. So it does a git fetch and then git for-each-ref with a particular structured format that references the upstream of each branch, then uses awk to search for the word gone.

Because if you print it out in this particular way using this format, Git will say the local branch name and then the upstream. But if the upstream has been deleted, it will specifically say (gone) in brackets, so you can actually use that to filter them down. And then I pipe that to git branch -D, so...well, xargs, of course. I love a little shell pipeline. As an aside, these are fun little things to build up. So that is Git gone.

And then the other one that I have is Git down, which is what I use more. And Git down works on top of Git gone: it's git checkout main && git pull && git gone. But that means I get to type git down into my terminal whenever a branch happens to get merged in upstream land. [laughs]
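(For reference, a sketch of the pair in .gitconfig form, reconstructed from the description rather than copied from Chris's dotfiles; the exact incantation in the linked blog post may differ:)

```
# ~/.gitconfig (sketch)
[alias]
    # Delete local branches whose upstream has been deleted ("[gone]").
    gone = "!git fetch -p && git for-each-ref --format '%(refname:short) %(upstream:track)' refs/heads | awk '$2 == \"[gone]\" {print $1}' | xargs -r git branch -D"
    # Check out main, pull, then clean up merged-and-deleted branches.
    down = "!git checkout main && git pull && git gone"
```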

STEPH: [laughs] Oh, that's adorable. I love it. I like the Git gone, and yeah, I like the Git down just for fun. You are inspiring me; now I really want a Git bless your heart that's, like, maybe a Git blame or a Git revert. [laughs]

CHRIS: I've definitely seen people do Git praise as an alias for Git blame.

STEPH: That's nice.

CHRIS: But Git bless your heart is...ooh, I love that.

STEPH: [laughs] I might have to add that just so I can type it, and then someone can say, "What are you doing?" [laughs] Cool, I love it.

CHRIS: Little things, little fun bits to add to your day and to automate and have a little fun while you're at it. So that's where I'm at.

STEPH: All about the communication and fun. That's what I'm here for, and the singing. Let's not forget the singing.

CHRIS: And the singing, of course.

STEPH: [singing] On that note, shall we wrap up?

CHRIS: Let's shall. Oh.

STEPH: [laughs]

CHRIS: The show notes for this episode can be found at bikeshed.fm.

STEPH: This show is produced and edited by Mandy Moore.

CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.

STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.

CHRIS: And I'm @christoomey.

STEPH: Or you can reach us at hosts@bikeshed.fm via email.

CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.

ALL: Bye.

ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
