
Using Objective Data To Tackle Issues Within Software Development Teams, Part 2

In Part 1 of this article, I reviewed some of the common issues that arise at the planning and development stages of a software project and showed, with real-life examples, how to use objective data and insights to resolve them.

In this part of the article, let’s see how to use objective data to tackle issues at the code review and software testing stages of your software development lifecycle (SDLC).

Discipline Issues at the Code Review Stage of SDLC

Let’s start with probably the most common issue: overlooked pull requests. First of all, one of your team members may simply forget to assign a reviewer to a pull request, so it gets lost down the road. Or, for example, a developer forgets to move the ticket to the “In Review” column in Jira. Keep an eye on such lapses and use simple alerts to keep your team aware of them. No matter how many reviewers are assigned to a task, set up a system of simple signals to make sure each accountable person is aware of the code review task.
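
For example, if your code lives on GitHub (an assumption; the same idea works with any Git hosting), a small script can sweep open pull requests and flag the ones with no reviewer assigned. A minimal sketch, with placeholder repo and token:

```python
# Sweep open PRs and flag those with no reviewer assigned.
# REPO and TOKEN are placeholders; adapt to your hosting of choice.
import requests

REPO = "your-org/your-repo"
TOKEN = "ghp_..."

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"state": "open"},
)
for pr in resp.json():
    if not pr["requested_reviewers"]:
        # Route this message to your team channel instead of stdout.
        print(f"PR #{pr['number']} '{pr['title']}' has no reviewer assigned")
```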

Another discipline issue you may encounter at the code review stage is this: the reviewer opens the task in Jira but cannot quickly figure out which task a pull request relates to, so they decide to look into it later, which, in fact, never happens. The solution here is also an alert system: make sure each pull request has a link to its Jira ticket, and that the reviewer gets a proper, timely notification of the task.
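
The check itself can be trivial. A minimal sketch, assuming your Jira keys follow the usual PROJECT-123 pattern:

```python
# Does the pull request reference a Jira ticket? The key pattern below
# matches the common PROJECT-123 format; adjust to your project keys.
import re

JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def has_jira_link(pr_title: str, pr_description: str) -> bool:
    """Return True if the title or description mentions a Jira ticket key."""
    return bool(JIRA_KEY.search(pr_title) or JIRA_KEY.search(pr_description))

# PRs failing this check should trigger an alert to their author.
assert has_jira_link("PROJ-42: fix login redirect", "")
assert not has_jira_link("fix login redirect", "quick patch")
```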

Let’s say you have a big pull request in progress: the reviewer opens the task, sees the pull request’s complexity, and decides to get back to it later because the task is too resource- and time-consuming. Such requests can pass into oblivion quite easily, too. Make sure your massive pull requests are clearly described and correctly formatted in Jira. Besides setting up alerts, your product owner or PM can help onboard the reviewer with comments provided along with the request.

Half of all Agile teams I’ve worked with didn’t have a linter configured for some reason, which is a bummer, as a linter automates the syntax part of the code review process.
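
As an illustration, a CI step can run the linter and block the merge before a human reviewer ever opens the diff. A minimal sketch, assuming a Python codebase with flake8 installed:

```python
# CI gate: run the linter and fail the build on any violation,
# so style and syntax issues never reach a human reviewer.
import subprocess
import sys

result = subprocess.run(["flake8", "src/"], capture_output=True, text=True)
if result.returncode != 0:
    print(result.stdout)
    sys.exit(1)  # block the merge until the lint errors are fixed
```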

Predictability Issues at the Code Review Stage

Sometimes teams get overloaded with code review tasks, which isn’t good for team success. In one of the teams I worked with, the CTO was at the forefront of coding himself and assigned himself the role of chief code reviewer. As a result, in a six-person team, 50% of all code review tasks were assigned to the CTO. And the backlog kept growing.

As you can easily guess, only 50% of all team iterations were closed, because the CTO did not have time to check everything. When they introduced an elementary disciplinary practice of assigning the CTO no more than two or three review tasks per iteration, the team closed the sprint with 100% of tasks completed.
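
A quick way to catch such skew is to compute each reviewer’s share of review assignments. A minimal sketch, with illustrative data and a threshold you’d tune to your team size:

```python
# Flag reviewers who hold a disproportionate share of review tasks.
# The assignment list and MAX_SHARE threshold are illustrative.
from collections import Counter

assignments = ["cto", "cto", "cto", "dev_a", "dev_b", "cto"]  # reviewer per task
MAX_SHARE = 0.34  # no single person should hold more than ~a third of reviews

counts = Counter(assignments)
total = len(assignments)
for reviewer, n in counts.items():
    if n / total > MAX_SHARE:
        print(f"{reviewer} holds {n}/{total} review tasks: redistribute")
```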

Sometimes, your code review practice turns into a holy war within the team. Triggers can be as follows:

  • A thread has more than two replies from each participant;
  • You have too many code reviewers on your team;
  • There is comment activity but no commit activity.

All of these factors should alert you that something is going wrong at your code review stage.
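
The first and third triggers are easy to check programmatically. A minimal sketch of a “holy war” detector, assuming you can export a thread’s comment authors and the commits pushed since it started:

```python
# Flag review threads where participants keep replying but nobody pushes code.
from collections import Counter

def looks_like_holy_war(comment_authors, commits_since_thread_start):
    """True if someone has replied more than twice and no commits followed."""
    replies = Counter(comment_authors)
    heated = any(count > 2 for count in replies.values())
    return heated and not commits_since_thread_start

# Example: three replies each from two people, zero new commits -> flag it.
print(looks_like_holy_war(["ann", "bob"] * 3, commits_since_thread_start=[]))
```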

Quality Issues at the Code Review Stage 

You already have quality issues if your code review is shallow and not comprehensive. There are two good metrics to measure the quality of your code review:

  • You can measure each reviewer’s activity as the number of comments provided per 100 lines of code;
  • You can measure each reviewer’s impact, i.e., the percentage of comments that led to a line of code being changed as a result of the review.

These metrics will allow you to identify the most and least effective reviewers and draw conclusions about what should change in the process to improve code review quality.
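
A minimal sketch of both calculations, assuming you can export per-review comment and line counts from your Git hosting (the impact formula is one plausible reading of the metric):

```python
# Two reviewer-quality metrics computed from exported review data.
def activity(comments: int, lines_reviewed: int) -> float:
    """Reviewer activity: comments per 100 lines of reviewed code."""
    return 100 * comments / lines_reviewed if lines_reviewed else 0.0

def impact(comments_total: int, comments_addressed: int) -> float:
    """Reviewer impact: share of comments that led to a code change."""
    return comments_addressed / comments_total if comments_total else 0.0

print(activity(comments=12, lines_reviewed=400))        # 3.0 comments per 100 LOC
print(impact(comments_total=12, comments_addressed=9))  # 0.75
```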

Another quality issue that may arise at the code review stage: the developer brings very raw code to review, and the review turns into mere bug fixing of the most common issues and errors. This takes a lot of time and effort, which hurts the quality of the code review.

The key metric here is code churn, i.e., the percentage of changes made to each pull request after the review has started.
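
A minimal sketch of the churn calculation, assuming you track lines changed before and after the review started (e.g., collected from `git log --numstat` on the PR branch):

```python
# Code churn: the share of a pull request rewritten after review began.
def code_churn(lines_before_review: int, lines_after_review_started: int) -> float:
    total = lines_before_review + lines_after_review_started
    return lines_after_review_started / total if total else 0.0

# 300 lines in the original PR, 150 more changed during review -> 33% churn.
print(f"{code_churn(300, 150):.0%}")
```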

If you’ve gone through a code refactoring and the diff got pretty unclear as a result, don’t rely too much on automation. I suggest controlling this process manually: make sure stylistic refactoring goes into a separate commit or thread to keep your code review quality as high as possible.

You can also control the quality of your code review process with practices such as anonymous polling after the review (once the pull request is successfully closed), in which the reviewer and the tech lead evaluate both the quality of the code and the quality of its review. Whether stylistic refactoring is separated into its own commit can be one of your quality metrics here, among others.

Discipline Issues at the Software Testing Stage

Now let’s move on to the testing stage and see what discipline issues may pop up here. The most common one is the lack of any information about testers and QA engineers in Jira. That’s typical of teams that try to save money by buying only a few user accounts instead of adding all responsible parties to Jira. As a result, some tasks never get a “testing” status and remain uncovered.

My recommendation is to set up alerts and make sure all data is added to tasks in Jira to prevent confusion halfway through the project.

Predictability Issues at the Testing Stage

I recommend setting up SLAs for the whole testing period. Non-compliance with the SLA should result in defined penalties and should be communicated appropriately within the team.

As with code review, your testing team may be overwhelmed with tasks, especially if the team isn’t dedicated to a particular project. Even if your testers cover several projects at a time, you still need to analyze each tester’s activity to identify bottlenecks promptly.

If you have a complex test coverage pipeline, I suggest setting up metrics to measure build time, system rollout time, and autotest duration. If you can’t do that for some reason, then spend a day with your testers every two to three months to review their individual pipelines and gather insights into tester performance and how to make their lives easier.
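
A minimal sketch of such stage timing; the commands below are placeholders standing in for your actual build, rollout, and autotest steps:

```python
# Time each pipeline stage and log the durations for trend analysis.
import subprocess
import time

STAGES = {
    "build": ["make", "build"],            # placeholder commands:
    "rollout": ["make", "deploy-staging"],  # substitute your own
    "autotests": ["pytest", "tests/"],
}

for name, cmd in STAGES.items():
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    print(f"{name}: {time.monotonic() - start:.1f}s")  # feed into a dashboard
```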

Quality Issues at the Testing Stage

The core function of testing is to prevent bugs from reaching production. For any team or tech lead, it’s crucial to know which of your testers do this job better than others and who is a slow performer. In this case, we need to estimate the “bugness” of your testers, i.e., the ratio of tasks returned for fixes to the total number of testing tasks assigned to the tester.
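
A minimal sketch of the “bugness” calculation, assuming a Jira export that records each testing task’s assignee and whether it came back for fixes (the data below is illustrative):

```python
# Per-tester "bugness": tasks sent back for fixes vs. all assigned tasks.
tasks = [  # illustrative export
    {"tester": "kate", "returned_for_fixes": True},
    {"tester": "kate", "returned_for_fixes": False},
    {"tester": "igor", "returned_for_fixes": False},
    {"tester": "igor", "returned_for_fixes": False},
]

for tester in {t["tester"] for t in tasks}:
    own = [t for t in tasks if t["tester"] == tester]
    caught = sum(t["returned_for_fixes"] for t in own)
    print(f"{tester}: {caught}/{len(own)} tasks sent back for fixes")
```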

If your dev and QA teams are “playing ping pong” all the time (i.e., the tester returns a task to the developer, and the developer throws it back to QA without making any fixes), you’re in trouble.

As a metric, you may count tasks returned from testing as a percentage of all testing tasks. Or you can combine Jira and Git data to analyze returned tasks and understand who blocks the process: the developer or the tester.
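
A minimal sketch of ping-pong detection, assuming you can export a task’s Jira status history as an ordered list (the status names are placeholders for your workflow):

```python
# Count how many times a task bounced from testing back to development.
def ping_pong_rounds(status_history):
    bounces = 0
    for prev, curr in zip(status_history, status_history[1:]):
        if prev == "In Testing" and curr == "In Development":
            bounces += 1
    return bounces

history = ["In Development", "In Testing", "In Development", "In Testing",
           "In Development", "In Testing", "Done"]
print(ping_pong_rounds(history))  # 2 bounces: worth a closer look
```

If Git shows no new commits between two bounces, the developer returned the task without fixes; if commits exist but the bug persists, start the conversation with the tester’s reports.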

The Methodology of Using Metrics Based on Objective Data

My recommendation is to automate your alert system and deliver all alerts to the team via a bot integrated with your team’s messenger of choice (e.g., WhatsApp, Viber, Telegram, etc.). At 8allocate, we used different communication and notification means, such as emails and virtual dashboards, but nothing proved as effective as chatbots. You can either create one using open-source and DIY tools or hire a professional chatbot development company to build a custom AI-based bot, depending on your budget, timeline, and general requirements.

As practice shows, teams respond better to bots than to a manager who tells them what to do and where they’re wrong. My recommendation is to notify the accountable team member first and only send an alert to the whole team if the responsible person has ignored the system alert for one or two days.
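
As an illustration, here is a minimal sketch of that escalation rule using the Telegram Bot API; the token, chat IDs, and threshold are placeholders:

```python
# Notify the accountable person first; escalate to the team channel only
# if the alert has been ignored long enough.
import requests

BOT_TOKEN = "123456:ABC..."             # placeholder bot token
PERSON_CHAT, TEAM_CHAT = "111", "222"   # placeholder chat IDs
ESCALATE_AFTER_DAYS = 2

def send(chat_id: str, text: str) -> None:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": chat_id, "text": text},
    )

def alert(message: str, days_ignored: int) -> None:
    target = TEAM_CHAT if days_ignored >= ESCALATE_AFTER_DAYS else PERSON_CHAT
    send(target, message)
```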

Methodology #1

  1. Build and implement the chatbot
  2. Start with discipline issues
  3. Sign a team agreement, i.e., each team member gives public consent to follow the specified teamwork rules

Methodology #2

  1. Every process has exceptions
  2. Use a “No tracking” label: tag all tasks that are difficult to decompose or that take too long to complete with “No tracking”, and make sure your chatbot takes it into account. By analyzing the pipeline of “No tracking” tasks, you, as a PM or tech lead, get a helicopter view of your team’s performance; see the sketch below.
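
A minimal sketch of honoring that label when the bot computes its metrics (the task data shape is illustrative):

```python
# Exclude "No tracking" tasks from automated alerts, but keep them
# visible as a separate exception pipeline for periodic review.
tasks = [
    {"key": "PROJ-1", "labels": ["No tracking"]},
    {"key": "PROJ-2", "labels": []},
]

tracked = [t for t in tasks if "No tracking" not in t["labels"]]
exceptions = [t for t in tasks if "No tracking" in t["labels"]]

# The bot alerts only on tracked tasks; the PM reviews the exceptions.
print([t["key"] for t in tracked], [t["key"] for t in exceptions])
```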

Methodology #3

  1. Use automation alongside manual control
  2. Assign a developer on duty to monitor processes

This is a rolling position that passes from person to person every week. The role implies that all test tasks are appropriately reviewed, and you, as a team lead, help the person on duty monitor issues by supervising them. This way, you’re one step closer to building a self-organized team.

To wrap it up, here are the key ways to use objective data to tackle issues over the course of the software development life cycle.

Gather data! Configure all processes so that you can collect as much data as possible. Even if you can’t make use of this data now, you’ll have a trump card for the future and will be able to leverage it for retrospective analysis and more.

Automate processes and set up an alert system. This will help both individual team members and the whole team stay responsive and avoid a huge backlog.
