ALTERNATE UNIVERSE DEV

The Bike Shed

347: Tracking Velocity

Chris talks about a small toy app he maintains on the side and about working with a project called capybara_table. Steph is getting ready for maternity leave and wonders: how do you track velocity and know if you're working quickly enough?

They answer a listener's question about where to get started testing a legacy app.

This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.

jnicklas/capybara_table: Capybara selectors and matchers for working with HTML tables

Become a Sponsor of The Bike Shed!

Transcript:

CHRIS: Just gotta hold on. Fly this thing straight to the crash site.

STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.

CHRIS: And I'm Steph Viccari.

STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs]

CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action is a weird way to describe it, but there we are.

STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened.

CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari.

STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of two voices [laughs] this whole time and been tricking everybody.

CHRIS: It has been a struggle. But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time.

STEPH: It's been a very impressive talent in how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world?

CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those, for me, serve different purposes, productivity things, or whatever.

But at some point, I was like, this is too much work, so I consolidated them all. And I kept like, there was a handful of features that I liked, smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain.

It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it.

And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into.
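
(A rough sketch of the kind of task Chris describes; the model and mailer names here are hypothetical, not from his app.)

```ruby
# lib/tasks/reminders.rake -- hypothetical sketch of a scheduled check-and-email task.
namespace :reminders do
  desc "Email a reminder if anything is out of order"
  task check: :environment do
    # `JournalEntry.overdue` and `ReminderMailer` are made-up names for illustration.
    overdue = JournalEntry.overdue
    ReminderMailer.nudge(overdue).deliver_now if overdue.any?
  end
end
```

On Heroku, a task like this would typically be run on a schedule with the Heroku Scheduler add-on invoking `rake reminders:check`.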

First, I was on Heroku-18. Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while?

Turns out no, because I couldn't even get the app to boot locally; something about some gems, or, I think, Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass.

So node-sass needed Python 2, which I believe is end of life-d, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something.

The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough.

STEPH: I feel like this is one of those reasons where we've talked in the past about you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes.

But then there's this where you get excited, and then you want to dive in. And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take.

CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built as there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate.

And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works. And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something.

But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to or is, you know, the flip side of how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen.

There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question that you have, asked by one other person on Stack Overflow, with no answers, four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. So I really appreciate when frameworks and communities care a lot about both that first run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults print some funny characters to your terminal.

And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common.

But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use within Capybara, particularly within a feature spec. So if you have a table, you can now make an assertion that's like, expect the table to have table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it.

And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it.

And so it's another one of those cases where testing can be a really useful constraint from the usability and accessibility of your application. And so, just in every way, I found this project works so well. Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found.
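
(A sketch of what that might look like in a feature spec, going off the description above; the route and factory are made up, and the exact matcher name is worth confirming against the capybara_table README.)

```ruby
# spec/features/users_table_spec.rb -- sketch based on the description above.
require "rails_helper"

RSpec.feature "Users table" do
  scenario "shows each user's name and role" do
    create(:user, name: "Jane", role: "Admin")  # assumes a FactoryBot factory

    visit users_path                             # hypothetical route

    # Key the hash by the column headers; pass only the columns you care about.
    # On failure, the matcher prints the table it actually saw as ASCII art.
    expect(page).to have_table_row("Name" => "Jane", "Role" => "Admin")
  end
end
```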

STEPH: That's awesome. I've definitely seen other thoughtboters, when working in codebases, add really nice helper methods around that for, like, checking, does this data exist in the table? And so I'm used to seeing that type of approach, or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future.

CHRIS: Yeah, really, just such a delightful thing. And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you?

[laughter]

STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, a little panicked mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, and what everyone thinks you should or shouldn't have and things that you need to do.

So I've been ramping up heavily in that area. And then also planning for when I'm gone and then what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for.

On the weird side, I ran into something because I'm still in test migration world. That is one of like, this is my mountain. This is my Everest. I am determined to get all of these tests. Thank you to everyone who has listened to me, especially you, talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done.

CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience.

STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into, well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered.

So, migrating a controller test over from Test::Unit to RSpec, there are a number of controller tests that issue requests or call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine.

But based on the way a lot of the information is getting set around logging in a user and then performing an action, and then trying to log in a different user, and then performing another action, that was causing mayhem. Because then the second user was never getting logged in because the first user wasn't getting logged out. And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request spec because that way, we're going through the full Rails routing. We're going through more of the sessions that get set, and then we can emulate that full request and response cycle.

And that was something that I just hadn't, I guess, I hadn't done before. I've never written a controller spec where then I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit tests. And they're not going to emulate, or give us, the full lifecycle that a request spec does.

And it's something that I've always known, but I've never actually felt that pain point to then push me over to like, hey, move this to a request spec. So that was kind of a nice reminder to go through to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to the request spec land.
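
(A hedged sketch of the kind of request spec this pushes toward; the routes, factories, and authentication scheme are hypothetical.)

```ruby
# spec/requests/posts_spec.rb -- sketch of a multi-user flow as a request spec.
require "rails_helper"

RSpec.describe "Posts", type: :request do
  it "handles two different users logging in within one example" do
    alice = create(:user)
    bob   = create(:user)

    # Each request goes through the full routing and middleware stack,
    # so the session is set and cleared the way it is in production.
    post session_path, params: { email: alice.email, password: "password" }
    post posts_path,   params: { post: { title: "From Alice" } }
    delete session_path                                  # log Alice out

    post session_path, params: { email: bob.email, password: "password" }
    get posts_path

    expect(response.body).to include("From Alice")
  end
end
```

Because every `post` and `get` exercises routing, before actions, and the session store, logging the first user out and the second one in behaves the way it would in a real browser session.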

CHRIS: I don't know what the current status is, but am I remembering correctly that the controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there's feature specs. I feel like I'm conflating...there's like controller, request, and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things anymore. And the true controller spec unit level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up?

STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer focus of a test versus request specs are now more encouraged. Request specs have also been around for a while, but they used to be incredibly slow.

I think it was more around Rails 5 that then they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case.
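
(A minimal sketch of that narrow, unit-level case; the controller and the Devise-style sign_in helper are assumptions for illustration.)

```ruby
# spec/controllers/admin/reports_controller_spec.rb -- narrow unit-level sketch.
require "rails_helper"

RSpec.describe Admin::ReportsController, type: :controller do
  it "returns forbidden for a non-admin user" do
    sign_in create(:user)             # assumes Devise-style controller test helpers
    get :index
    expect(response).to have_http_status(:forbidden)
  end
end
```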

CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions.

There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, but like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff that you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you when you did something wrong. And that's where they're going to sneak in.

STEPH: I agree. And yeah, by going with the request specs, then you're really leaning into more of an integration test since you are testing more of that request/response lifecycle, and you're not as likely to get caught up on the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of I know I use request specs. I know there's a reason that I favor them. But it was one of those like; this is why we lean into request specs. And here's a really good use case of where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec.

MIDROLL AD:

Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half.

So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!

Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.

In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.

Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.

Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.

Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today!

STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how do you track velocity and know if you're working quickly enough?

So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me and then also still balancing that and making sure that I'm working at a sustainable pace?

And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this; there could be are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity?

CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth about, quote, unquote, "Velocity." I think the question of individual velocity is really interesting.

There's the case of an individual who joins a team who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that.

There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the like; I don't know, like a pull request a day sort of thing feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individual deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets.

And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual.

I have, I think, some more pointed thoughts around as a team how we would think about it and communicate about velocity. But I'm interested what came to mind for you when you thought about it, particularly for the individual side or for the team if you want to go in that direction.

STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from because they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well.

For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase?

Because then that will change the team's expectations of yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improving the code as we go versus just trying to get new features in.

I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But that's something that I keep in mind: if some other people are working a lot of evenings or just working extra hours, and they do have a higher velocity because of it, I try not to include that in my calculation as heavily.

I also really liked how you highlighted that certain individuals often their velocity is unblocking others. So it's less about the specific code or features or tickets that they're producing, but it's how many people can they help? And then they're increasing the velocity of those individuals.

And then there are other metrics that unfortunately can be gamified but are still something to look at, like, how many hours are you spending on a particular feature or the tickets? But I like that phrasing that you used earlier of what's your progress? So if someone comes to daily sync and they mention that they're working on the same thing and we're on like day three, or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off.

And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing.

But yeah, I think that's why this question fascinates me so much is because I don't think there's one answer that fits for everybody. There's not a way to tell one person to say, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like.

I remember there was one team that I joined that initially; I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster?

And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m. and then 5:00 o'clock; it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours.

Versus, a previous team that I had been on had more of like a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire, not everything has to be rushed.

I think the biggest thing that I'd look at is if velocity is being called into question, so if someone is concerned that someone's not producing enough or if the team is not producing enough, the first place I'm going to look is what's our priorities and see are we prioritizing correctly? Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to?

I feel like that's the common disconnect between how much work we're getting done versus then what's actually causing people or product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough.

CHRIS: Yeah, I definitely resonate with all of that. That was a mini masterclass that you just gave right there in all of those different facets. The one other thing that comes to mind for me is the question is often about velocity or speed or how fast can we go. But I increasingly am of the opinion that it's less about the actual speed. So it's less about like, if you think about it in terms of the average pace, the average number of features that we're going through, I'm more interested in the standard deviation.

So some days you pick up a ticket, and it takes you a day; some days you pick up a ticket, and suddenly, seven days later, you're still working on it. And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is but very consistently doing that. And that really helps with our ability to estimate overall bodies of work with our ability for others to know and for us to be able to sort of uphold expectations.

Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose.

Those are the sort of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. But then there's other stuff that you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations.

But ideally, if you're focusing on I want to be delivering in that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way, hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations.

And then if we do have a team that is pretty consistently delivering whatever overall velocity but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like, 6,5,6,5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that.

If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what do I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do.

So it's really the slow is smooth, smooth is fast adage that I've talked about in the past that really captured my mind a while back that just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. So it's a different way to think about the question, but it is where my mind initially went when I read this question.

STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of like what is my output in the strict terms of tickets, and PRs, and things like that. But I do think more about my progress and how can I constantly show progress, not just to the world but show it to myself.

So if there are tickets that then maybe the ticket was scoped too big at first and I've definitely made some really solid progress, maybe I'm able to ship something or at least identified some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily.

And I feel like that also gives us more opportunities to then iterate on what's the goal? Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update on this as the progress that I've made in this direction, that gives people more opportunities to then respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing.

CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so another critical aspect. You can be that person on the team who actually doesn't ship much code at all. Just make sure that we don't ship the wrong code, and you will be a critical member of that team.

But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph. "I want to start by saying thanks for putting out great content week after week." We are very happy to do so. "So a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all.

My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to this kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where would you start with this.

STEPH: Legacy code, I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with. So I'm going to try not to steal any of those because they're wonderful.

But to add to that list that is soon to come: often with applications like these, I need some testing in place because, as this person mentioned, that's how they work. And then also, at that point, you're just scared to ship anything because you just don't know what's going to break. So one area that you could start with is, what's your rollback strategy? So if you don't have any tests in place and you send something out into the world, what's your plan to be able to roll back to a safe point? Or perhaps it's using feature flags for anything new that you're adding, so that you can quickly turn something on and off.

But having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, then let's figure out a plan to mitigate some of that pain. So that's where I would initially start.
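
(One hedged example of what the feature-flag side of that strategy could look like; the episode doesn't name a library, Flipper is just a common choice, and the flag and view names are invented here.)

```ruby
# Gemfile (one common option):
#   gem "flipper"
#   gem "flipper-active_record"

class InvoicesController < ApplicationController
  def new
    # Ship the new code path dark, then flip it on for a slice of users.
    if Flipper.enabled?(:new_invoice_flow, current_user)
      render :new_flow       # hypothetical views
    else
      render :legacy_flow
    end
  end
end

# From a Rails console, roll out gradually and switch off instantly without a deploy:
#   Flipper.enable_percentage_of_actors(:new_invoice_flow, 10)
#   Flipper.disable(:new_invoice_flow)
```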

And then, as for adding the tests, typically, you start with testing as you go. So I would add tests for the code that I'm adding, that I'm working on, because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests that test everything; they're going to be feature-level, integration-level specs because I'm at the point where I'm just trying to document the most crucial user flows. And then once I have some of those in place, even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making.
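
(A sketch of one such high-level, admittedly slow spec; the checkout flow, factories, and copy are invented for illustration.)

```ruby
# spec/system/checkout_spec.rb -- documents one crucial user flow end to end.
require "rails_helper"

RSpec.describe "Checkout", type: :system do
  it "lets an existing customer place an order" do
    sign_in create(:user, :with_saved_card)   # assumes Devise integration helpers

    visit products_path
    click_on "Add to cart"
    click_on "Checkout"
    click_on "Place order"

    expect(page).to have_content("Thanks for your order")
  end
end
```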

And in a recent episode, we were talking about how to get to know a Rails app. You highlighted a really good way to get to know those crucial user flows or the most common user flows by using something like New Relic and then seeing what are the paths that people are using. Maybe there's a product manager, or just someone that you're taking the app over from, who could also give you some help in letting you know what the most crucial features are that users are relying on day to day, and then prioritizing writing tests for those particular flows.

So then, at this point, you've got a rollback strategy. And then you've also highlighted what are your most crucial user flows, and then you've added some really high level probably slow tests. Something that I've also done in the past and seen others do at thoughtbot when working on a legacy project or just working on a project, it wasn't even legacy, but it just didn't have any test coverage because the team that had built it before hadn't added test coverage. We would often duplicate a lot of the tests as well.

So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user-provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went.

So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you.

CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in. Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks.

And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected.

Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that.

Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things.
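
(A minimal sketch of wiring up one such tool, assuming the sentry-ruby and sentry-rails gems; the DSN and sample rate are placeholders.)

```ruby
# config/initializers/sentry.rb -- minimal error monitoring setup sketch.
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]
  config.breadcrumbs_logger = [:active_support_logger]
  config.traces_sample_rate = 0.1    # sample a slice of requests for performance data
end
```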

And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it.

So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind.

One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true?

Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited.

And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation.

The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day.

So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. So that's where building out that testing is all the more critical so that you can perform those security upgrades because they are now truly critical to get done.

And so it becomes sort of more than a nice-to-have, more than "this makes me feel comfortable." It is pretty much a necessity if you want to go through that, and you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high vulnerability target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And I think it's an easy business case to sell, is, I guess, the way that I would frame it. So those were some of my thoughts.

STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs would I make? Would I wait till I have tests in place to then start the upgrades, or would I start the upgrades now but just know I'm going to spend more time manual testing on staging? Or maybe I'm solo on the project.

If I have a product manager or someone else that can also help the testing with me, I think I would go for that latter approach where I would start the upgrades today and then just do more manual testing of those crucial flows and then have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything?

CHRIS: I think similar to the thing that both of us hit on early on is like, have some feature specs that just kick the whole application as one connected piece of code. Have that in place for the security upgrade, testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph?

STEPH: I think those are fabulous thoughts. I think you covered it all.

CHRIS: Sounds good. Well, in that case, should we wrap up?

STEPH: Let's wrap up.

CHRIS: The show notes for this episode can be found at bikeshed.fm.

STEPH: This show is produced and edited by Mandy Moore.

CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.

STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.

CHRIS: And I'm @christoomey.

STEPH: Or you can reach us at hosts@bikeshed.fm via email.

CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.

ALL: Byeeeeeeeee!!!!!!!!

ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
