Prior to leading engineering & operations at Slack, Allan was Chief Technology Officer at ServiceNow, where he was responsible for overseeing all technical aspects and strategy. He has co-founded and held senior leadership positions at multiple companies and was a venture capital investor for seven years.
He founded Vyatta (acquired by Brocade), the open-source networking company, co-authored "Cisco Router Configuration" and "Network Management: A Practical Perspective," and has been granted a patent in the field of data routing.
"I generally say developers want to do three things... They want to solve hard problems at scale. They want to see that hard problem when they solve it... get put to use! The third thing that I think, honestly, is they just don't want to work with jerks.
I think if you master those three things then you end up with a very happy and productive development team.”
- Allan Leinwand
Leinwand previously served as an adjunct professor at the University of California, Berkeley where he taught on the subjects of computer networks, network management, and network design. He holds a BS in Computer Science from the University of Colorado at Boulder.
Mesmer's AI-bots automate mobile app accessibility testing to ensure your app is always accessible to everybody.
To jump start your accessibility and inclusion initiative, visit mesmerhq.com/ELC
Jerry Li: While Patrick and I were preparing for our conversation, I was reflecting on my past experience as an engineering leader, sitting in different planning meetings where we often hear the term "butts in seats." In management, I think it's easy for people to abstract people into a number...
But having a more human approach to productivity and management, it actually can unlock the true potential of engineering teams. And that is why we're so excited to have this conversation today to explore more of a human side of developer productivity and provide a fresh perspective.
Allan Leinwand: That's great, Jerry. Yeah, I think it's about being able to think about not just butts in seats, or individual team members as numbers on an org chart, but being able to have the empathy to understand what the teams are doing and to think about how we make them successful, because that's your job as a leader.
Your job as a leader is to take the resources that you have at your disposal, apply them to the business needs, but do it in a way that allows everyone to be successful.
No one wants to be in a role where they can't be successful. No one wakes up and says, "I can't wait to go to work and do a horrible job!"
But everyone has the best intent in mind. And it's your job as a leader to take that potential, unlock it, and let them be productive.
And one of the keys to that, as you mentioned, is this world of developer productivity: making it easier and more pleasant to get your job done. No one wants lots of roadblocks and hurdles in the way of getting their job done.
Patrick Gallagher: So when Jerry and I were thinking about how to approach this conversation... there are so many different sub-topics we could go into. So we wanted to start from a high level, to get a sense of your perspective and how you approach developer productivity in general.
Is there a certain story or experience that comes to mind when you think about the challenges or issues related to developer productivity?
Allan Leinwand: Yeah, there are a lot of them. As I was thinking about this question, I was trying to come up with one story or one event. And I don't think there's one; I think there's a cyclical pattern of things you see happen.
The cyclical pattern is... there'll be a development team. They'll be working on something, working toward a deadline. And they make some assumptions about the tooling and the framework and the testing and the infrastructure that's there to help them get their job done.
And as a developer, you think about the code. And yes, then you think, "I need to open a PR and I need to push it, and then I need to run some tests and I need to merge it and eventually I need to deploy it."
But what you really worry about is the code. And what you don't want is all that other scaffolding and infrastructure getting in your way.
So I can think of lots of events where the team is getting ready to push something. The team is trying to do a bug fix. A team is trying to get a feature out the door. And now the test framework has failed. Or the ability to integrate has failed. Or the deployment mechanism, or the experiment framework that turns on that new feature for a particular cohort, has failed.
And those are the events where you realize how important it is to have the scaffolding and the support infrastructure around developers to make them as productive as they need to be. Whether it's unit tests, functional tests, or end-to-end tests. Whether it's integration frameworks. Whether it's code scanning looking for security issues, or deployment tools, or experimentation and A/B testing, launching light mode vs. dark mode. All these things, in some way or another, can affect how productive developers can be.
So I don't think there's one exact moment that hit me, Patrick. But there are definitely repeated events where it's like, "Yeah, that's right. Oh man, now I've got to go figure that out..."
"Oh wow! The test framework's down. We have to get this feature out..."
"Oh wow! We're in the middle of an incident, we've got to get this bug fix pushed out! But the deployment tools are broken... so how do I figure that out?"
"Or GitHub is having an issue. Or maybe we're having the issue with GitHub? I don't blame them. Maybe we're having an issue with our integration to GitHub. So how do I get past that? What's the backdoor, what's the emergency merge lever I can pull?" There are repeated experiences I've had where we have to think about what happens when the golden path doesn't work. The happy path doesn't work. And now, what are the backup paths?
So that means when we build these systems, we always think, "Okay, that's great. But what if that's down? Now what?" And the experience I take away is that we always ask ourselves, "Okay, if that testing framework's down and we have to merge, how do we get through it?"
So the one thing I think about is always trying to think through the various pieces of the entire developer productivity lifecycle, and making sure that we have both a happy path and maybe a not-so-happy path to get through it in times of crisis.
Jerry Li: Yeah, that also resonates, going back to the earlier point you mentioned: when you talk to potential engineering leaders about embarking on this path, also think about the downside of all that, the challenging part of it, when things don't work out.
Allan Leinwand: I think that's always just about risk mitigation, right? So you're talking about risk as a manager, or risk as a developer, risk as a product company, or a scrum team trying to get something out the door. What are the various risks you could encounter...
And you hope you never have to deal with those risks. But, you know, we all buy insurance for a reason. We buy insurance because we think there could be risk. And when you think about developers and developer productivity, if you think about the risks and get ahead of them, then you make people more productive.
Patrick Gallagher: So the natural follow-up question that comes to my mind, when we talk about understanding the lifecycle and building systems to navigate the happy and the unhappy paths: what would you say is your approach to managing developer productivity? What are some of the essential components you consider when you're helping build out that strategy?
Allan Leinwand: I think one of the key things that we think about is obviously building the right tooling. So what is the tooling that developers want?
What I find doesn't work very well is when you prescribe, "Thou shalt use this IDE! Thou shalt use this test framework! Thou shalt use this particular set of integration tests!"
A lot of this is very code-base dependent, whether you're working in Java or Kotlin or some other language. So we need to think about the different types of code bases, the needs those code bases have, and the tool sets that apply to them. Whether you're writing in Swift or whatever you're writing in.
So, you know, one of the things we think about to make developers happy, to come up with that happy path, is... "What is the tool chain that applies to their environment? What is the tool chain that allows them to be most productive?"
And even if it's not a uniform set of tools that every team is using, we try to apply a uniform set of metrics. Those metrics would be things like, "How long does it take to get a development environment? How long does it take me to check in code? How long does it take me to run my set of unit tests and integration tests? How long does it take me to merge my code back into main or master? How long does it take that code to be integrated and then get ready for deploys? And how long does it take to get deployed?"
In a SaaS model like here at Slack, we can deploy in minutes. Maybe you're building something for the App Store or the Play Store, and those deployment cycles are weeks.
But, again, if you sort of measure each of those increments, as people go through that entire workflow of being a developer... that's what we try and optimize regardless of the code base or the tool set or the tool chain that they're using.
And that's how I think you make people happy. If you're a developer, you know, you want to be able to get a development environment quick. You want to be able to have it integrated into your favorite tool chain. You want to be able to start pushing code and get it code tested. You want to merge it in as fast as possible, and you want to get deployed without a whole lot of headaches and concerns about if the deployment is going to be successful or not.
So being able to sort of look at the various steps in that workflow. And then think through what are the tools and the metrics you can look at every step along the way is how we think about that... that's the overarching philosophy we have in terms of helping people be productive.
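The stage-by-stage measurement described above can be sketched roughly as follows. The stage names, timestamps, and schema here are illustrative assumptions, not Slack's actual pipeline:

```python
from datetime import datetime

# Hypothetical timestamps for one change moving through the workflow.
# Stage names and times are made up for illustration.
events = {
    "env_ready":   datetime(2021, 5, 1, 9, 0),    # development environment available
    "code_pushed": datetime(2021, 5, 1, 11, 30),  # PR opened
    "tests_done":  datetime(2021, 5, 1, 12, 10),  # unit/integration tests finished
    "merged":      datetime(2021, 5, 1, 13, 0),   # merged to main
    "deployed":    datetime(2021, 5, 1, 13, 25),  # live in production
}

def stage_durations(events):
    """Minutes spent in each consecutive stage of the workflow."""
    ordered = sorted(events.items(), key=lambda kv: kv[1])
    return {
        f"{a} -> {b}": (t2 - t1).total_seconds() / 60
        for (a, t1), (b, t2) in zip(ordered, ordered[1:])
    }

for stage, minutes in stage_durations(events).items():
    print(f"{stage}: {minutes:.0f} min")
```

Measuring each increment separately, rather than only the end-to-end time, is what makes it possible to see which step is the bottleneck.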
I generally say developers want to do three things... They want to solve hard problems at scale. What I mean by hard problems... that could be a single algorithm. It could literally be scaling to millions of endpoints. It could be an ML/AI model you're building.
But they want to generally solve a hard problem. I don't think any developer wakes up and goes, "Yeah, I'm gonna go do some easy stuff today!"
They just want to be mentally challenged and want to work on a hard problem.
I think the second thing is they want to see that hard problem, when they solve it, get put to use! Again, it could be used by one person or by a million people, but they want to see it deployed, in use!
And then the third thing, honestly, is they just don't want to work with jerks. So if you think about happy developers, it's: how do we get them into the code base solving the hard problems, solving the problems of scale? How do we show them that it's not a science project that's going to live in a dark corner and never see the light of day? And how do we put them on a team that supports them?
I think if you master those three things, you end up with a very happy and productive development team. And then you combine that with, you know, the metrics and the analysis of that entire pipeline. Continually measuring and analyzing that pipeline is important.
One of the examples I'll give you is we have a monthly meeting, which we go over our developer metrics. And we go over a lot of the metrics I'm talking about right now.
We go over, "What are the build times on Android and iOS and Windows and macOS?" and all the other platforms that we support here at Slack.
Then we go over, "Great! What's the time to merge?" And then, "What's the time to deploy? What's the time between releases, the time in which we're actually able to get a release out?" We measure these in discrete elements and we talk about them.
And sometimes we say, "The time to get a good development environment, we want it to be under a minute."
And we go "Great! We got it under a minute! That's super exciting!"
But the question then is "Can we get it to 10 seconds?"
So I think you always have to push the limits. And I think if you build a set of infrastructure and tooling and a developer environment where the developers know you're on their side, and you're always fighting to make their path to writing fun code easier, getting it deployed, and not dealing with jerks, basically...
I think they'll be happy and supportive of the environment. And it'd be a super exciting place to work.
Patrick Gallagher: You were talking about different, productivity metrics... What are some of the important conversations that should be happening around those dev metrics that also help better account for the people involved in producing those metrics?
Allan Leinwand: One of the fallacies about metrics is that they're there to be overwatchers, or some sort of metrics police. Metrics are just numbers. Really, metrics are a vehicle to get to the right answer.
So I think that some of the important metrics that we look at to get to the right answer are again, things along the lines of how quickly can a developer get an environment? How quickly can they get tests written? How quickly can they get a build deployed? How quickly can they get that code merged? How quickly can they get things out the door?
And we measure this thing called cycle time. Cycle time in our world is literally from the moment you write the first epic / story / idea about the code, all the way until the code is deployed.
And sometimes that cycle time can be hours or minutes, and sometimes it can be months and months. But it's important to understand every component of that cycle, so that you can always be optimizing it.
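Cycle time as defined here reduces to a single subtraction per work item. A minimal sketch, where the item fields and dates are assumed for illustration:

```python
from datetime import datetime

# Hypothetical work items: "created" is when the epic/story/idea was first
# written down, "deployed" is when the resulting code shipped.
items = [
    {"id": "EPIC-1", "created": datetime(2021, 3, 1), "deployed": datetime(2021, 3, 2)},
    {"id": "EPIC-2", "created": datetime(2021, 1, 10), "deployed": datetime(2021, 4, 20)},
]

def cycle_time_days(item):
    """Days from first idea to deployed code."""
    return (item["deployed"] - item["created"]).days

for item in items:
    print(item["id"], cycle_time_days(item), "days")
```

The spread between the two invented items (one day versus about a hundred) mirrors the "hours to months" range mentioned above; the interesting work is breaking that gap into its components.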
I think if you present it to developers as... " The metrics we're measuring are there because we want to make your lives more productive. We want to make it easier for you to get the job done and do the stuff you love."
Then it's not about, "Why are you measuring me? And what's this metric about?" It comes with an honest, empathetic view of, "We want to understand what's holding you up. And by understanding what's holding you up, and where things are being slowed down in the process, we can improve it."
Now you have to back that up by actually doing the improving. You can't just talk about it. You actually have to deliver on the improving. And that's where sort of like the engineering part comes in.
We had a situation at Slack back in, I don't know, months ago, probably over a year ago, in COVID times, who the hell knows... where we couldn't get our development environments to our engineers as fast as we liked. So we put a concerted effort around faster developer environments.
And now when we survey people and say, "Are you able to get a development environment fast enough?"
Now that's never a top survey response or complaint.
So, the other part of your question: we actually do run surveys asking people about pain points on a regular basis. We have a quarterly developer productivity survey. And then we also run other intermediate surveys, because as we make changes to the environment, to make people more productive and to give better tooling and to have different systems in place, we ask people, "What do you think about this? Is this useful? Is it not useful?"
And we try to quickly iterate based upon that feedback.
So the short answer to your question is... we hold monthly meetings to understand the various metrics and hold ourselves accountable to pushing those metrics in the right direction.
And then the second thing we do is ask questions, because we think, "Hey, we just moved build times from, I dunno, ten minutes to three minutes! Did anyone notice or care? Was that really important? We thought it was important. But do people just like that they can go get an extra cup of coffee, and now that the build finishes quicker, they don't care? Or do they actually care?"
We want to understand that in a way that suits the needs of our customer, which, from the developer productivity point of view, is really the developers.
Patrick Gallagher: When you're looking at those, are there certain inputs or things you're looking out for? And how do the qualitative and quantitative inputs change or impact your decision making, in terms of how you're prioritizing or allocating or changing the structure of the organization? I'd love to dive more into your thought process with that.
Allan Leinwand: Let me go a little bit deeper on those. For each of those metrics, we look at the P50, the P90, and the P99. A lot of people, when you look at the P50, sort of the median number or the average number... the details can get lost. You know, "Hey, the average build time across iOS is seven minutes..."
"Looks great. Sounds reasonable!"
But the P90 is, you know, 42 minutes. Like, "What the hell is going on there? What happened?"
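The gap between the average and the tail is easy to demonstrate. The build times below are invented for illustration, and the percentile function uses a simple nearest-rank definition:

```python
import math

# Invented build times (minutes): mostly fast, with a slow tail.
build_times = [5, 5, 6, 6, 7, 7, 7, 8, 42, 45]

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of the data at or below it."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

print("mean:", sum(build_times) / len(build_times))  # dragged up by the outliers
print("P50: ", percentile(build_times, 50))          # the typical build
print("P90: ", percentile(build_times, 90))          # "what the hell is going on there?"
print("P99: ", percentile(build_times, 99))
```

Here the mean looks merely "a bit slow" while the P50 says most builds are fine and the P90 exposes the painful tail, which is exactly why all three are worth reporting.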
So I think the way we think about things is we spend a lot of time looking at the data. We spend a lot of time understanding the data and understanding the pain that people are feeling.
We also spend a lot of time thinking about the actual process that people are going through and making sure that we understand exactly why they're feeling, what they're feeling.
Let me give you an example. We were looking at one particular team, and we saw their cycle time, the time from when the epic was written until the code got deployed, was fairly extended. The team was a bit of an outlier. I couldn't quite figure it out.
As we dug deeper into that team... we found out the number of epics assigned per person was 2x the number assigned per person on all other teams. So basically this team was piling on and making people multitask, and not giving them the downtime to focus on a problem and solve things.
So each one of these team members was slowed down. The cycle time for their epics, compared to most of the other teams, was actually a lot slower simply because each individual team member was working on twice as many things at once.
So we gave guidance to that particular team: "You're at, sort of, the P90 of epics per person on that particular metric... why don't you come back to the P50, make it N divided by two, and let's check your cycle time."
And sure enough, the cycle time went down significantly. The team got things done. They didn't have as many things in flight, but the ones they did have in flight got done properly.
And that was just a simple use of a metric, looking at the number of epics per person, and then being able to help them understand how that metric was driving long cycle times. It wasn't about tooling. It wasn't about developer productivity in terms of "code faster!" and "type faster!"
It was just about a managerial style: being able to let people focus and get their jobs done better.
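The epics-per-person diagnosis in this story can be sketched as a simple outlier check. The team names and numbers below are hypothetical:

```python
from statistics import median

# Hypothetical per-team data: work-in-progress epics per person, and
# observed cycle times (days) for recently finished epics.
teams = {
    "team_a": {"epics_per_person": 4, "cycle_times": [40, 55, 60]},
    "team_b": {"epics_per_person": 2, "cycle_times": [12, 15, 18]},
    "team_c": {"epics_per_person": 2, "cycle_times": [10, 14, 20]},
}

def outliers_by_wip(teams):
    """Flag teams whose work-in-progress is above the norm, with their median cycle time."""
    typical = median(t["epics_per_person"] for t in teams.values())
    return {
        name: {"wip": t["epics_per_person"], "median_cycle": median(t["cycle_times"])}
        for name, t in teams.items()
        if t["epics_per_person"] > typical
    }

print(outliers_by_wip(teams))
```

In this made-up data, the one team carrying twice the typical work-in-progress is also the one with the long cycle times, which is the pattern the guidance above was meant to correct.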
Jerry Li: Do you measure the other side of productivity, as you mentioned, how fast they code?
Allan Leinwand: No, no, we don't do that.
Jerry Li: We hear a lot of complaints that people have concerns being measured on those metrics. What's your advice for people that are still doing that?
Allan Leinwand: We don't look at, you know, lines of code submitted, or PRs. We don't have a PR rate. We obviously have something like PRs per dev, because we just want to understand what's going on. But we don't have an expected number of PRs per dev per hour, day, week, or anything crazy like that. We don't look at tests per team or code coverage by tests, because with a lot of those metrics, when people understand what they're being measured on, they game them. Right?
Like, my favorite one in the past: we used to say, "Every PR must have a test. And it must be a passing test!"
So, great! We found people writing, you know, a test that was just "return 1."
And that was it. The test just returned one, and that was the end of it. Then they could say, "Check! We have a test per PR!"
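The "test that just returns" anti-pattern described here looks, in spirit, something like this hypothetical example: a test that technically exists and passes, but asserts nothing about the code.

```python
def complicated_feature(x):
    # Stand-in for real production logic (hypothetical).
    return x * 2 + 1

def test_one():
    # The gamed test: it runs, it "passes," it checks nothing.
    return 1

def test_actually_useful():
    # What the policy really wanted: an assertion about behavior.
    assert complicated_feature(3) == 7

# Both satisfy a naive "every PR must have a test" policy,
# but only one of them would ever catch a regression.
test_one()
test_actually_useful()
```

This is why a coverage or tests-per-PR number, on its own, says little about whether the tests carry any signal.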
People don't have bad intentions. No engineer wrote that thinking, "I've got 'em, I'll figure out a way to screw this metric up."
They did it because they wanted to get their job done. They're just like, "Okay, I've got to check that box." So that was a bad example of a policy, or a metric that was being enforced for no good reason.
So to your point, Jerry, managing by metrics is not the goal here. The goal is to have a better understanding of that developer workflow, using metrics to gain that understanding, and to focus your efforts on removing pain at each step of the workflow where it exists. Metrics aren't about measuring productivity or grading someone. I've never been involved in an engineering calibration or promotion discussion that had a, "Well, this developer did this many PRs..." It's NEVER been part of my career.
If that ever happened, I don't know what I'd do, but I would object violently. Because that's not really the point.
What has to happen is you have to ask yourself, "If Team A is not as productive as Team B... what's in Team A's way? What can we do to help them? Are they having trouble writing tests? Are they not getting the regression passes they need through the test suite? Are they using an IDE that doesn't integrate well with our backend systems? Is there something else there that's slowing them down as they go through this cycle?"
I guess my advice is: if you're measuring and managing engineering teams by metrics, you're missing the point. Metrics can give you information about the car you're driving, but they don't drive the car.
Jerry Li: What I find fascinating about the points you shared is that all the management and the metrics are oriented around the developer experience, and that eventually leads to better engagement, so they feel more empowered. I think that's definitely the right path.
Allan Leinwand: Yeah. I mean, we have a developer productivity team who's responsible for our quality engineering practices, our developer frameworks, our developer environments, our testing frameworks. And in some organizations, again, not ones I've been involved in, thankfully, I've heard that the quality teams and the development teams are almost adversarial...
That's not the way things should be. Developers should want to invest in quality. Quality should want to invest in development teams. Productivity should overlay both of those in order to make them successful. Because at the end of the day, you want to produce a product that people love that comes to market fast and you don't have to work with people that are in that adversarial relationship.
Patrick Gallagher: When you were talking about the three things that developers want and that for your role as an engineering leader ... to help them spend more time doing the things that they really want to do...
In the conversations that you're having with different leaders or different developers, how do you bring the metrics you're measuring back to reinforcing that, ultimately, this is all about what you want to do and what you want to achieve as a developer?
Is there a way that you approach or reinforce that conversation?
Allan Leinwand: A little bit. You know, you want to work at scale, you want to ship, and you don't want to work with jerks. Or, you want to work in a supportive environment, which is probably the more polite way of saying it.
We do produce a JIRA dashboard that speaks to those first two. So we have a JIRA dashboard that talks about, "Are you working on certain problems at scale?"
It lists epics per person, velocity of accomplishing story points, burn-down charts, and all those things you'd expect. That's one portion of the JIRA dashboard, as I picture it in my head.
And then there's the other part of the JIRA dashboard, which is about the ability to deploy: did the deployment go out, and are there incidents from it, or incident remediation action items as a result of that deploy? Did an incident occur? And do we have action items we need to follow up on?
So there's the yin and the yang of that, which we do provide. Every team gets this developer productivity dashboard. The manager probably looks at it fairly regularly? I'm not so sure all the devs do, but that's okay. They don't have to. They just want to be focused on their code, focused on their stories and their epics, and they want to know that they can deploy.
But the managers should have that dashboard visibility: "What are we working on? What does our velocity look like? Are we being held up because of some portion of the developer productivity pipeline? And are we producing things that, when they get out and get deployed, cause customer pain? If so, how do I weave and balance features versus quality a little bit within my team?"
So that's what we want our engineering managers thinking about. And so it's important to give them the tools to look at that, and to look at those specific metrics.
Again, I think of it like you're driving the car, or you're flying the plane. You're still doing the driving, you still know where you're going, but the data gives you some indication of how well you're doing along the way.
But, you know, I never drive from here to my mom's house and say, "That was a good drive. I went 4.3 miles at 62 miles an hour..."
I don't talk about the metrics! I just say the drive was pleasant and crash-free, right?
Jerry Li: That's a fascinating analogy!
Allan Leinwand: I just came up with it. Seriously. It's just like, you know, you think about that, right? You don't ever get off a plane flight and think about the metrics of it. The pilot is staring at the metrics the whole time. But you as the passenger, the one using that facility, you really care about, "Was it on time? Did it get me there? And was it crash-free?"
Jerry Li: Are the managers accountable for improving metrics to create a better environment for developers to be productive?
Allan Leinwand: They're not responsible for improving the metrics. There's no judging of managers based upon the metrics. There's judging of a manager based upon the product they produce, or the infrastructure they're rolling out, or, in the developer productivity team, the tools that are produced to help with that.
So there's no manager scorecard based upon these metrics. These are just tools we want to give managers in order to help them make their teams more productive or understand where they might be out of balance.
Maybe you have, like I said, too many epics assigned per person. Maybe you've got too many incident remediation action items that you need to burn down, coming due because there are SLAs on those, and you have to divert some resources. But there's no scorecard where we line up every manager and hold some sort of metric against them.
There's nothing like that. I think that'd be counterproductive. The idea isn't to judge one team versus another, or one team's productivity versus another's, because every team is working on different things. You have to understand how to make each team productive.
Jerry Li: I love the analogy you shared, about how when you drive to your mom's house, you don't care that it was 4.3 miles at 60 miles an hour; it's about whether it was a pleasant experience.
The next question I have is related to people's experience of doing the work, of producing those metrics, diving more into developer happiness.
How do you understand if your dev teams aren't happy, or if their experience of work isn't going well? And what do you do about it?
Allan Leinwand: Yeah. Well, fortunately here at Slack, we have a tool called Slack. And our developers are not shy about using it... is probably the nicest way of saying it.
So we do have channels that allow folks to air grievances and speak to the issues that we're seeing about their productivity and tooling or testing or frameworks or other processes that aren't working for them.
We also, like I said, on a monthly basis aggregate these metrics and look at them all the way up through the entire engineering leadership chain, to understand what is happening in the environment. Sometimes that can be seen as a bit of a 10,000-foot view, but I think it's important to always drill down into the areas that are not working well, and get in and understand what we're doing to fix them. So, you know, we have a lot of leadership support from the top to drive that down.
And then the other thing we do a lot, as I mentioned earlier, is run surveys, maybe too many. And we're trying to plot a line. The first survey's data is interesting, but, you know, the third or fourth is more relevant, because you can plot the line of "What's in your way? What's most productive?"
One of the things we see in some of our environments is a set of what we call "flaky tests." Tests where you run them in pass one and n percent fail; then you run them again in pass two and, you know, one-third of n fail. So you can assume a good chunk of those tests aren't giving you a good signal.
We saw that signal come to us in the metrics. We heard that feedback come to us in surveys. We saw it in channels inside of Slack. And then we put some projects in place to significantly lower flaky tests, so that the testing framework itself, and the infrastructure and services it runs on, give a good signal as opposed to just a whole lot of noise.
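One simple way to surface flaky tests, assuming you keep per-run pass/fail records for the same code, is to flag any test whose outcome differs across runs. The test names and results below are made up:

```python
from collections import defaultdict

# Hypothetical results: test name -> outcome, for repeated CI runs of the same commit.
runs = [
    {"test_login": "pass", "test_upload": "pass", "test_search": "fail"},
    {"test_login": "pass", "test_upload": "fail", "test_search": "fail"},
    {"test_login": "pass", "test_upload": "pass", "test_search": "fail"},
]

def flaky_tests(runs):
    """Tests that both passed and failed across identical runs: their signal is noise."""
    outcomes = defaultdict(set)
    for run in runs:
        for name, result in run.items():
            outcomes[name].add(result)
    return sorted(name for name, seen in outcomes.items() if len(seen) > 1)

print(flaky_tests(runs))
```

Note the distinction this makes: a test that fails every time (like `test_search` here) is a real failure worth fixing, while a test that flip-flops is the one eroding trust in the suite.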
And then we run surveys afterwards and say, "Okay, we think we improve test flakiness by 50%. Do you feel like that's better now?"
And we ask people for qualitative, empathetic feedback, as opposed to just quantitative metric feedback.
Jerry Li: What happens when the survey result does not align with the goal that triggered the effort? What are the things that help you debug and continue?
Allan Leinwand: Yeah, I think the answer is to go back to the drawing board. Hopefully we'll get some good qualitative feedback; most of our surveys have text boxes that people can type in. And we'll go back to those individuals and say, "We want to understand more. We want to understand why this didn't help. We want to understand the tooling, maybe the environment, maybe why your code isn't building fast enough. Tell us more about this."
We're naturally curious on the engineering team here at Slack, and on most of the engineering teams I've worked on. We naturally want to make things better. And, you know, the short answer is, like I said, we have this developer metrics meeting every single month. And I can't think of any month where we were like, "Well, everything's great. Our job here is done!"
There's always more to get done. There's always more to dig into. So what would happen, Jerry, specifically is, let's imagine we had, I'll make it up... we had improved iOS build time by 50% and we thought, "Great! That's awesome. We're done!"
And we'd do a survey, and the iOS developers go, "Our build times are horrible!"
Then we would have to go back and say, "Did we miss the mark there? We decreased it from X to X over two."
And then, you know, we might hear, "Yeah, but when I worked at Google, when I worked at pick-your-favorite-company, it was X over 30! We're still way longer than we should be."
That's news to us. Then we'd have to go figure that out. We've had certain events like that occur, and it's sort of like, you do what I call the tilty-head thing and go, "Hmm, okay?"
And then you try to understand a bit better and circle back again. And just like everything in engineering, every process that's out there is iterative. We don't think of it as one and done; we keep looping over and over again, trying to continue to refine things.
Jerry Li: That's the craft.
Allan Leinwand: Yeah, that's the hard part, right? I mean, there are certain projects that are done, but I don't know of a lot of code on the planet where people go, "I am done. I never need to refactor or re-engineer that ever again."
There's always something that needs to be done.
Patrick Gallagher: Another scenario, Allan... what happens when productivity metrics go up and it looks like things are going well, cycle times are shortening or the other ways that you're measuring performance are improving, but the qualitative feedback from developers is that things are getting worse?
So productivity up, happiness down. How do you typically handle a situation like that?
Allan Leinwand: I guess if I was presented with that situation for a particular team or set of teams I would have to take a look at some other things. Some other ways of understanding what's going on.
I'll give you an example. Here at Slack, we had a... pattern over the past year where people were not having enough time to actually do coding. And when we went back and surveyed our developers, they were saying they didn't have enough time to do "deep thinking."
So we did a little bit of data analysis. We queried people's calendars, and we found out, you know, the average developer is in this many meetings a week, and they actually don't have more than two to three blocks of two or three hours the entire week where they can just sit down and code, like just heads-down and code. Because there's a company meeting, and there's an all-hands, there's a team meeting, there's a something.
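That kind of calendar analysis, finding how many uninterrupted blocks a developer actually has in a day, can be sketched like this. The meeting data, workday bounds, and two-hour threshold here are hypothetical stand-ins, not Slack's actual query:

```python
from datetime import datetime, timedelta

def free_blocks(meetings, day_start, day_end, min_hours=2):
    """Return the gaps of at least `min_hours` in a workday,
    given a list of (start, end) meeting tuples."""
    blocks = []
    cursor = day_start
    for start, end in sorted(meetings):
        if start - cursor >= timedelta(hours=min_hours):
            blocks.append((cursor, start))
        cursor = max(cursor, end)  # handle overlapping meetings
    if day_end - cursor >= timedelta(hours=min_hours):
        blocks.append((cursor, day_end))
    return blocks

# A sample day: 9-5, with a team meeting and an all-hands.
day = datetime(2021, 6, 1)
meetings = [
    (day.replace(hour=10), day.replace(hour=10, minute=30)),  # team meeting
    (day.replace(hour=13), day.replace(hour=14)),             # all-hands
]
blocks = free_blocks(meetings, day.replace(hour=9), day.replace(hour=17))
for start, end in blocks:
    print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))
# 10:30 - 13:00
# 14:00 - 17:00
```

Note the 9:00-10:00 gap is dropped: an hour between meetings isn't long enough to get into flow, which is exactly the pattern the survey surfaced.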
So we actually devised something we call "Maker Time" here at Slack. This is an example of what we did. We said, I think it's Tuesday, Wednesday, Thursday, Friday, for three hours in the morning, for all individual contributors: no meetings. Full stop. Schedule time around that.
And that's an example where we felt like we were moving the developer productivity bar in the right way, but people still weren't feeling productive. So we went back and said, "What else is there in the system we could look at?"
And the survey results told us, "I don't get enough time to focus. I don't get that focused time. I can't get in the flow." Okay, great! So we instituted Maker Time.
Now for managers, that's hard to do because you've got interviews and all sorts of things, so we didn't do it for managers, but we did it for ICs.
And generally that's been well received. We've been running that for a number of months now. And there were time zone issues we had to deal with. Because, you know, morning in San Francisco is afternoon in Dublin, and it doesn't really work well with Pune in India. But, you know, we've tried to adjust a little bit.
But I think the way I think about your question, Patrick, is: look outside of what you've currently been measuring to see if there are other things that could be affecting that happiness or discontent. Ask a lot of questions and then see if you can devise an answer to put it together.
Will "Maker Time" here at Slack last forever? I don't know. And I think the natural inclination of engineers is to be cynical, so maybe if you're listening to this you're like... Great! I'll just sleep in for three more hours in the morning during my Maker Time, since I know I don't have meetings.
Okay, if that's what makes you productive? Great. I don't prescribe how people use that time. I just want them to have that time in order to be more productive, and to see if that actually correlates to them being happier.
Jerry Li: Yeah. Every developer wants to get into the flow zone as much as possible. And it applies to other people as well! Our team has been trying that, because we're a small team with a lot of meetings, and just making small changes definitely helps a lot for focus and engagement.
Allan Leinwand: I sometimes wish I could have that time. I look at my calendar every day and say, "I wish I could have two or three hours..."
But that's not my job. My job isn't to get in that flow. But I do sometimes, personally, try and block out what I call catch-up time. So I'll block out two or three hours. And to me, it's the flow of getting into messages and emails and, you know, reviewing the documents that are longer, that do require concentrated time.
So I definitely get the benefits of that. It definitely makes me feel more productive, even in my, my limited amount of individual contributor mode that I get to do here.
Patrick Gallagher: One thing I want to acknowledge, Allan: in our interactions, it feels like you do a really incredible job of centering yourself when you switch between different engagements. Because whenever we've had different conversations in meetings, you've just come from something, but immediately you're zeroed in and focused and totally present.
And I think that's such a hard thing to do, the task switching, and then to be centered and totally engaged in the things that you do.
Do you have a secret that helps you in between some of those fast paced experiences?
Allan Leinwand: Yeah, I mean, part of my job is to context switch incredibly hard. I was just looking at my calendar. And on average, I have 12 to 14 meetings a day. So I have two secrets if you will.
Well, one of them is I keep a physical list. So I have this notebook, and every day I write down a list of tasks that I'm working on. At the end of the day, I look at the tasks that are completed, take the tasks that are not completed, and put them on a new sheet of paper. I guess I could do it in pick-your-favorite-tool, but I just happen to be a pen-and-paper guy.
And what it allows me to do is be in the moment and be present because I KNOW that anything that I'm worried about or anything that I have to do is in a spot and I know where to get it.
So for me, it allows my brain not to wander and not to be thinking about that past meeting, "Oh, I gotta do this and this and this from the previous meeting," or "Wait, I have a meeting to prepare for, so I have to be thinking about three other things."
I just know that it's always there. Like this is my crutch and I can go there.
The other thing I'll tell you that I do is meditate. I try and meditate at least 10 minutes a day. I won't tell you I hit it every day. But I will tell you, I do try and give my mind time to be still and quiet. Because I think that does allow you to train your brain, train that muscle, to be able to switch from a crazy chaotic world to a still world, and then go back again.
And I think just that context switching, and that forcing your brain to exercise that muscle memory, no pun intended, uh, is super useful.
Patrick Gallagher: One of my favorite quotes from different meditation teachings is, "How you do one thing is how you do everything." And so I feel like the practice of meditation is a metaphor for how you task switch, in that how you meditate is probably also how you context switch and task switch.
Allan Leinwand: Yeah, I'm not a great meditator. I've only been meditating for a couple of years. I won't profess to be anything but a novice. But I will tell you that I sometimes force myself, like, I think there's a quote, and I'm going to completely butcher it, but I think it's something by Gandhi. Something along the lines of, "I had a really busy day and I couldn't meditate for my hour. So I'm going to meditate for two."
He forced himself to take the time. So I do force myself sometimes to take that 10 minutes even in the middle of an incredibly busy day. And I think that does train me to sort of like, as you say, Patrick context switch...
Know that anything I have to do, that's critically important will be in my notepad, in my Slack channel, in my notes doc, somewhere. And it'll just be there waiting when I get back.
I don't have to remember it. Because that's what I find I think about a lot: the things to do after the meeting, or the things to do before the meeting. And if I can clear my mind of those activities, because I know they're somewhere else... it helps me.
And I think the other trick about context switching, and again, this is going to sound counterintuitive, is to make shorter meetings. I try not to have meetings that are more than half an hour. I try and force my meetings to be a half hour or shorter. Which means more of them, but it also means they're more focused on the right things.
I sometimes find people want to have hour meetings with you that really could... we've all had this experience. You have an hour-long meeting and you realize it could have been 10 minutes, because you just had to get to the point.
When you force people to have a shorter meeting time with you that in turn forces them to focus the conversation, which means you get to the point faster.
I technically do have half hours; much shorter than a half hour is hard, but sometimes I do 15 minutes. But hour-long meetings are rare for me.
Patrick Gallagher: One more question, follow up to some of the comments that we had talked about earlier, talking about metrics and identifying which ones are the most effective.
How do you identify metrics that may be indicating the wrong thing or incentivizing the wrong behavior?
You know, when some of the examples you shared earlier referencing different people trying to game the system. How do you identify that a metric is serving the wrong purpose or indicating the wrong thing?
Allan Leinwand: You know, one of the things that you find is, if you have a metric that is consistently hitting its target without effort, without concentrated focus... then you ask, are you really measuring the right thing?
So there are certain cases where we set up metrics for ourselves. We have quarterly metrics. We use OKRs. We set up KRs, and you find out in the first week of the quarter that you're hitting that KR. Then the next 11 weeks of the quarter you're kind of cruising. The question you ask yourself is: did we really set the right metrics here?
The converse is also true. If you have a team that's killing it trying to hit a particular KR for 12 weeks and they don't come close, then you ask: did we really set the right KR? I guess it takes practice. It takes iterative cycles, and it takes trial and error.
I don't think there's necessarily a formula that I have for doing that, other than try it out. If it feels like it's a good, earnest effort to get there, then you're probably doing the right thing. If it's too easy or too hard, then you might think about changing that metric.
Patrick Gallagher: So with everything we've talked about so far today, you know, now a lot of our workplaces are evolving into this post COVID world. And a lot of the norms being established by companies are trending more towards remote and hybrid contexts.
Do you have any insights related to how some of the principles that you've shared here translate or apply to a more remote or hybrid world?
Allan Leinwand: I think that one thing you need to think about if you go into a more remote or more hybrid world, which I think is definitely here to stay... is to understand how do you communicate about these metrics to your team members?
I mentioned that we use JIRA dashboards here. We use Slack channels, obviously. We post information via bots and apps into Slack all the time.
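As a rough illustration of that kind of bot, here is a sketch of posting a metrics update to a channel via a Slack incoming webhook. The message format, metric names, and webhook URL are all hypothetical, not Slack's internal tooling:

```python
import json
from urllib import request

def build_metric_message(metric, value, target, unit=""):
    """Format a metric summary as a webhook payload.
    Assumes lower is better (e.g. build time, flaky-test count)."""
    status = ":white_check_mark:" if value <= target else ":warning:"
    return {"text": f"{status} {metric}: {value}{unit} (target: {target}{unit})"}

def post_to_slack(webhook_url, payload):
    """POST a JSON payload to a Slack incoming webhook URL."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status

# Build the message; posting would use a real webhook URL from
# the Slack app configuration, omitted here.
msg = build_metric_message("iOS build time", 4.2, 5.0, unit=" min")
print(msg["text"])  # :white_check_mark: iOS build time: 4.2 min (target: 5.0 min)
```

Posting to a shared, non-private channel is what makes the numbers ambient: anyone in the org sees the same metric at the same time, which is the transparency point Allan makes next.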
This shouldn't be, you know, a set of dashboards that a select number of people can see in a dark room somewhere, in a corner, behind a passkey. These should be metrics that are fully visible, fully transparent, that anyone in the org can see. And all the metrics I'm talking about are in non-private channels; they're out there for anyone to see.
And I think the way I think about the remote world, the world of hybrid, is: you democratize location, access, and the metrics themselves. And you just make them applicable and available to everyone.
That's one thing I think I'd really encourage people to do is as they're building these frameworks, as they're building these tool chains, as they're finding these bumps in the road... share them! Be transparent about them!
Because it's going to matter to someone who's sitting at home to feel connected. It's going to matter to the folks that are maybe in an office or offices to be able to collaborate together. It's going to make the entire team feel inclusive if you share the metrics and everyone has the same visibility. And I think that will benefit everyone no matter where they're working or how they're working.
Jerry Li: In other words, transparency, or the level of transparency, is a multiplier to engagement and to productivity in the remote and hybrid world.
Allan Leinwand: Yeah. I think, if you assume that you won't be able to bump into people in the hallway or the elevator or the kitchen, or wherever your office has for you to have those ad hoc conversations... how do you facilitate those ad hoc conversations through other mechanisms?
Well you can facilitate them by having the data available and then allowing people to chat in Slack about them. You can facilitate them by having, you know, messages that are being sent or Zoom calls where you sort of don't really care where people are sitting. Or don't really care if they happen to be in the right room.
You know, using the Hamilton quote, "you don't have to be in the room where it happens."
You can actually be anywhere and get the same data and that room can be anywhere on the planet, as long as you have the right tools and the right way to make that data available to everyone.
The goal of having data and understanding this developer workflow and where the pain is, is not to hide it and then go in a back room and fix it. It's to expose it! It's to let people bring their ideas. It's to let them understand that you see the pain and the success that they're having, and to celebrate both equally and transparently.
And I think being able to do that across the organization is hard. But super powerful! To be able to stand up and say, "Here's where we're at. Here's the good, and here's the bad, and here's what we're working on together."