There’s a persistent myth that software development is an expensive black box that’s extremely hard to manage. Slow development progress, missed deadlines, and poor code quality are the three key issues that usually arise when a software development team has no standards for evaluating developer performance or doesn’t follow common software engineering rules.
When a software development team has neither the proper tools nor a methodology to measure developer performance, managers and team leads fall back on subjective evaluation, which, in turn, can lead to many mistakes.
When teamwork and internal workflows aren’t controlled and managed properly, team members start ignoring the rules and slacking off, which results in poor morale and productivity.
Using data artifacts is an excellent solution to these issues, as it relies on objective data rather than subjective estimates.
Objective and Measurable Data Sources
When it comes to measurable and objective data, let’s first look at where it can be retrieved:
- Git, which contains your source code history;
- Jira or an alternative task tracking tool;
- GitHub, Bitbucket, GitLab, and similar platforms that contain code review data.
Besides, there’s one more useful mechanism for gathering data: collecting subjective estimates.
Unstructured data can be dirty and a real pain in the neck, and there’s no way around that but to filter, process, and visualize it. If your team has never collected any data artifacts before, you may simply find no data in the above sources to work with, which is a real bummer!
To avoid such a situation, we suggest that you implement Measurable Agile within your team from the very beginning and start collecting and keeping data artifacts in one place.
Let me tell you a horror story from my experience as a PM.
Once upon a time, I had a client who wanted to improve the quality of their existing digital solution. We were a team of 15 specialists who had to fix 30-40 bugs a week from production backlogs. When we explored the reasons, we realized that 30% of all tasks were never submitted for testing. At first, we thought it was an error, but after double-checking, it was confirmed: 30% of features were never tested at the development stage. Everything started with a small infrastructure issue that prevented 1-2 tasks per iteration from getting into the QA backlog. The issue wasn’t fixed, everyone seemed to have forgotten about it, and it eventually resulted in 30% of the code not being covered by tests. Down the line, it caused many more issues at the post-production stage.
As such, the first critical metric for any software development process is the availability of collectible and traceable data artifacts.
Sometimes, for the sake of measurability, you can sacrifice some Agile principles, such as preferring oral over written communication.
The Due Date practice, which we have implemented in several client-tailored teams to improve predictability, has proven to be highly efficient. The bottom line is that whenever a developer assigns “in progress” status to a task, they must set a due date for the feature to be either released or ready to be released. This practice teaches the developer to be a micro project manager of their own tasks, i.e., to take into account external dependencies and to understand that the feature is ready only when the client can use its result.
If the due date is missed, the developer needs to go to Jira, set a new due date, and leave a comment explaining why the deadline wasn’t met. Although it may seem like bureaucracy, after two weeks of this practice you can export all such comments from Jira with a simple script and run a retrospective review with your team. This gives you a lot of insight into why deadlines are being missed, facilitates informed decision making, and can significantly improve your team’s performance and productivity.
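Here’s a minimal sketch of such an export, assuming the Python `jira` package (`pip install jira`); the server URL, credentials, project key, and two-week JQL window are placeholders you’d adapt to your setup:

```python
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

# Tasks with a due date that were updated during the last two weeks.
issues = jira.search_issues(
    'project = PROJ AND due is not EMPTY AND updated >= -14d',
    maxResults=200)

for issue in issues:
    # Walk the changelog to spot due-date changes, then dump the
    # comments where the developer explained why the deadline slipped.
    changelog = jira.issue(issue.key, expand="changelog").changelog
    due_date_moved = any(item.field == "duedate"
                         for history in changelog.histories
                         for item in history.items)
    if due_date_moved:
        for comment in jira.comments(issue):
            print(f"{issue.key}\t{comment.author.displayName}\t{comment.body}")
```

Running it before each retrospective gives you a plain-text dump of missed-deadline explanations to walk through with the team.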
Issue-Based Approach
Processes often fail not because people maliciously violate them, but because developers and managers lack the attention and memory to comply with them all. By tracking violations of the regulations, we can automatically remind people what to do and get automated controls.
To understand what processes and practices you need to implement, you need to know how exactly your client’s business will benefit from your development.
Any software solution should possess the following three features to be considered a good one:
- Fast time to market;
- Appropriate quality (it doesn’t have to be perfect, but it should be fast, responsive, and intuitive);
- No feature creep.
As such, what really matters in software development is speed, quality, and predictability.
Ideally, we need to build the following workflow:
- Automatic data collection;
- Using this data to build metrics;
- Using metrics to find problems; and
- Report problems directly to the developer, team, or team lead.
Such a workflow allows everyone on the team to react promptly and deal with the issues detected.
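As a rough illustration, the whole loop can be as small as this Python skeleton; the `Finding` shape and the check/notify callables are hypothetical stand-ins for your own Jira/Git queries and messenger integration:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    owner: str      # the developer, team, or team lead to notify
    message: str    # a human-readable description of the problem

def run_checks(checks: List[Callable[[], List[Finding]]],
               notify: Callable[[Finding], None]) -> None:
    # Each check pulls fresh data, computes its metric, and returns the
    # problems it found; every finding goes straight to its owner.
    for check in checks:
        for finding in check():
            notify(finding)

# Example wiring, with stdout standing in for a real messenger:
run_checks([lambda: [Finding("alice", "no commits for 2 days")]],
           lambda f: print(f"@{f.owner}: {f.message}"))
```

The concrete checks plugged into such a loop are exactly the metrics discussed below.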
Now let’s take a look at the four stages of the software feature life cycle and see where we can expect bumps at each stage:
- planning,
- development,
- code review, and
- testing.
Planning Stage Issues and Resolutions
Discipline Issues
Discipline issue #1: there’s a planning meeting, but not all stakeholders attend it.
Say your team complains that testers don’t test the product properly. It turns out that the testers were never invited to planning meetings and are not on the same page as the developers. You need to solve this issue before rushing to fix metrics or communication issues.
Bottom line: make sure you involve every single stakeholder in the planning meeting and don’t leave anyone out!
Discipline issue #2: Too many high-priority tasks in the backlog.
Another crucial issue arises when you discuss priorities verbally and don’t reflect them in data. As a result, by the end of the iteration you find that your highest-priority tasks were omitted and your team focused on less important features.
Bottom line: make sure your team is aware of all the priorities in the iteration; and if 90% of your backlog tasks are marked as high priority, chances are slim that your real priorities have been addressed.
At 8allocate, on our client projects, we tend to have the following distribution of tasks: 20% high priority (must be 100% released), 60% medium priority, and 20% low priority (can be released later). In our experience, this is the optimal distribution of tasks by priority.
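A quick way to check your own distribution is to count backlog tasks per priority. A minimal sketch, again assuming the `jira` package, with the server, credentials, and project key as placeholders:

```python
from collections import Counter
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

# All tasks still in the backlog (not done yet).
issues = jira.search_issues(
    'project = PROJ AND statusCategory != Done', maxResults=500)

counts = Counter(issue.fields.priority.name
                 for issue in issues if issue.fields.priority)
total = sum(counts.values())
for priority, n in counts.most_common():
    print(f"{priority}: {n} ({n / total:.0%})")
```

If "High" comes out anywhere near 90%, it’s time to re-plan the iteration.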
Discipline issue #3: no data is being collected.
Either your project tasks haven’t been scoped, or the tasks aren’t descriptive at all. There are also cases when bugs are registered as tasks, and vice versa, or when tech-debt-related jobs aren’t assigned the proper type.
Bottom line: as a team lead or CTO responsible for managing numerous teams within an organization, you need to review the backlogs every two months to make sure each task is assigned the proper type: bugs are bugs, stories are stories, tech debt is tech debt, and so on. This can hardly be automated; doing it manually ensures maximum accuracy.
Predictability Issues
Predictability issue #1: wrong project estimations and deadlines.
Unfortunately, there’s no silver bullet for this issue. The only way out is to continuously educate your team on the principles of clean code driven development, the chosen software development methodology, and more. The better you motivate your staff to meet deadlines and estimate more accurately, the better the outcome! The good news is that the education process can be fully automated.
The first thing to do is single out problematic tasks with high execution estimates. We implement an SLA and make sure all tasks are adequately decomposed. We recommend starting with a maximum execution limit of two days per task, and then moving on to a one-day limit.
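A decomposition SLA like this is easy to police automatically. A minimal sketch flagging open tasks estimated above the two-day limit; the project key, credentials, and the limit itself are placeholders:

```python
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

# originalEstimate is a built-in JQL field; timeoriginalestimate is seconds.
issues = jira.search_issues(
    'project = PROJ AND statusCategory != Done '
    'AND originalEstimate > "2d"', maxResults=200)

for issue in issues:
    hours = (issue.fields.timeoriginalestimate or 0) / 3600
    print(f"{issue.key}: {hours:.1f} h estimated -- decompose before starting")
```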
The next step is to make it easier to collect artifacts that can be used for training and for raising the team’s understanding of why an estimation error occurred. We recommend using the Due Date practice described above for this purpose.
Another useful metric is code churn, which tells you the rate at which your code evolves and helps you understand where the gaps are and why they occurred.
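Code churn is straightforward to extract from Git itself. A minimal sketch counting lines added and deleted per author over the last 30 days from `git log --numstat`; a high deleted-to-added ratio on recent work is a hint of rework worth discussing:

```python
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--since=30.days", "--numstat", "--pretty=%an"],
    capture_output=True, text=True, check=True).stdout

added, deleted = defaultdict(int), defaultdict(int)
author = None
for line in log.splitlines():
    parts = line.split("\t")
    if len(parts) == 3 and parts[0].isdigit():  # "<added>\t<deleted>\t<file>"
        added[author] += int(parts[0])
        deleted[author] += int(parts[1])
    elif line.strip():                          # author line from --pretty=%an
        author = line.strip()

for name in added:
    ratio = deleted[name] / added[name] if added[name] else 0
    print(f"{name}: +{added[name]} / -{deleted[name]} (churn ratio {ratio:.2f})")
```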
Predictability issue #2: the sprint was planned, but so many changes were made to it that the team ended up focusing on different goals.
To solve this issue, we recommend calculating a few basic metrics: initial vs. changed sprint scope, changes of priorities, and the structure of the changes. Then you can estimate the approximate number of bugs and changes per iteration and, based on this, improve your planning predictability.
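The initial-vs-changed scope metric needs nothing more than a snapshot taken at planning and a diff at the end of the sprint. A minimal sketch, with the sprint name, file name, and JQL as placeholders:

```python
import json
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

def sprint_keys(sprint_name: str) -> set:
    issues = jira.search_issues(f'sprint = "{sprint_name}"', maxResults=500)
    return {issue.key for issue in issues}

# Right after planning, snapshot the agreed scope:
#   json.dump(sorted(sprint_keys("Sprint 42")), open("planned.json", "w"))

# At the end of the sprint, diff the final scope against the snapshot:
planned = set(json.load(open("planned.json")))
final = sprint_keys("Sprint 42")
print("added mid-sprint:  ", sorted(final - planned))
print("dropped mid-sprint:", sorted(planned - final))
print(f"scope change: {len(final ^ planned) / len(planned):.0%}")
```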
Quality Issues
Quality issue: no time to eliminate technical debt.
As long as you mark all technical-debt-related tasks in Jira as “technical debt,” the issue remains quite manageable. You can calculate what share of the team’s time went to tech debt during the quarter. If you agreed with the client on 20% but actually spent only 10%, take that into account and devote more time to the job during the next quarter.
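A minimal sketch of that quota check, summing logged time on tasks labelled "technical-debt" against all logged time in the quarter. The label, dates, project key, and the 20% quota are placeholders, and note the simplification: it counts each matching issue’s total logged time, not only the in-quarter worklogs:

```python
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

def logged_seconds(jql: str) -> int:
    issues = jira.search_issues(jql, maxResults=500, fields="timespent")
    return sum(issue.fields.timespent or 0 for issue in issues)

period = 'worklogDate >= "2024-01-01" AND worklogDate <= "2024-03-31"'
debt = logged_seconds(f'project = PROJ AND labels = "technical-debt" AND {period}')
total = logged_seconds(f'project = PROJ AND {period}')

share = debt / total if total else 0
print(f"tech debt: {share:.0%} of logged time (agreed quota: 20%)")
```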
Development Stage Issues and Resolutions
Discipline Issues
Sometimes we find ourselves in situations where we don’t know whether our developers are doing anything at all. This can be determined quite easily with two metrics (see the sketch after this list):
- Frequency of commits (at least once daily), and
- Active tasks in Jira (at least one).
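Both checks fit in a few lines. A minimal sketch, with the team roster, Jira account names, and JQL as hypothetical placeholders:

```python
import subprocess
from jira import JIRA

TEAM = ["Alice", "Bob"]  # git author names, placeholders

def committed_recently(author: str) -> bool:
    out = subprocess.run(
        ["git", "log", "--since=1.day", f"--author={author}", "--oneline"],
        capture_output=True, text=True, check=True).stdout
    return bool(out.strip())

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

for author in TEAM:
    if not committed_recently(author):
        print(f"{author}: no commits in the last day")
    active = jira.search_issues(
        f'assignee = "{author}" AND status = "In Progress"', maxResults=10)
    if not active:
        print(f"{author}: no active task in Jira")
```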
Another issue that can severely affect your project team and its productivity is occupational burnout. It, too, can be spotted by reviewing the developer’s activity in Jira.
Git rules can also be broken, which has detrimental effects on your teamwork. The first rule of thumb that helps is to add task prefixes from your tracker to commit messages: this connects tasks to code and gives you better transparency. You need to set up alerts and Git hooks to make it work; a minimal hook is sketched below.
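Here’s a minimal sketch of such a hook, saved as `.git/hooks/commit-msg` and made executable; the key pattern is a placeholder for whatever your tracker uses:

```python
#!/usr/bin/env python3
# commit-msg hook: reject commits whose message lacks a task key prefix.
import re
import sys

message = open(sys.argv[1], encoding="utf-8").read()  # git passes the msg file

if not re.match(r"^[A-Z][A-Z0-9]+-\d+", message):
    sys.stderr.write(
        "commit rejected: start the message with a task key, e.g. PROJ-123\n")
    sys.exit(1)
```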
Predictability Issues
It’s quite dangerous for your project’s predictability to have one software engineer accountable for too many tasks at the same time. It’s worth checking all the parallel tasks executed by a single developer through the code, not Jira, as Jira doesn’t always display relevant data. The more jobs a developer is assigned to, the higher the chance something will go wrong, and the lower the predictability.
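Given prefixed commit messages (see the hook above), counting a developer’s parallel tasks straight from the code is a one-screen script. A minimal sketch over the last week of history, with the two-task threshold as a placeholder:

```python
import re
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--since=7.days", "--pretty=%an\t%s"],
    capture_output=True, text=True, check=True).stdout

tasks = defaultdict(set)
for line in log.splitlines():
    author, _, subject = line.partition("\t")
    match = re.match(r"[A-Z][A-Z0-9]+-\d+", subject)  # e.g. PROJ-123
    if match:
        tasks[author].add(match.group())

for author, keys in sorted(tasks.items()):
    note = "  <-- too many parallel tasks?" if len(keys) > 2 else ""
    print(f"{author}: {len(keys)} tasks {sorted(keys)}{note}")
```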
Another common predictability issue at the development stage is that a developer has a lot of problems but doesn’t escalate them to the tech lead.
These issues can be identified with a data-based approach. Let’s say your developer had little coding activity yesterday. It doesn’t necessarily mean there’s a problem, but you, as a team lead, can pay attention to it and ask questions to get a detailed picture of what’s going on. What if they’re stuck on a task and badly need your help? Asking never hurts, and it helps avoid missed deadlines and poor product quality.
Quality Issues
Development has a direct impact on quality. The question is how to tell which developer has the most significant impact on quality decline.
We suggest doing it as follows: calculate the developer’s “bugness” criterion:
- take all the tasks assigned to them in the tracker over the last three months;
- find among all tasks those marked as “bugs”;
- review the code of these bugs.
This will help you calculate the ratio of tasks in which defects were found after release to all the tasks performed by the developer; this ratio is your “bugness” criterion.
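A simplified proxy for this ratio can be pulled straight from Jira, assuming post-release bugs are assigned back to the developer who wrote the feature; the developer names and JQL are placeholders:

```python
from jira import JIRA

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

def count(jql: str) -> int:
    # The result list carries the total match count regardless of page size.
    return jira.search_issues(jql, maxResults=1).total

for dev in ["alice", "bob"]:
    done = count(f'assignee = {dev} AND resolved >= -90d')
    bugs = count(f'assignee = {dev} AND resolved >= -90d AND issuetype = Bug')
    if done:
        print(f"{dev}: bugness {bugs / done:.0%} ({bugs}/{done})")
```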
The next quality problem at the development stage is writing code that is difficult to support. I won’t go into detail here, as I wrote a separate article on this. There is a metric called legacy refactoring that shows how much time is spent on embedding new code into the existing codebase, and how much old code is removed or changed when new code is written.
Probably one of the most essential criteria when estimating quality at the development stage is SLA control for high-priority bugs.
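A minimal sketch of such control: list open highest-priority bugs older than the agreed limit (24 hours here, a placeholder) so they can be escalated:

```python
from datetime import datetime, timedelta, timezone
from jira import JIRA

SLA = timedelta(hours=24)

jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("bot@yourcompany.com", "API_TOKEN"))

issues = jira.search_issues(
    'project = PROJ AND issuetype = Bug AND priority = Highest '
    'AND statusCategory != Done', maxResults=100)

now = datetime.now(timezone.utc)
for issue in issues:
    created = datetime.strptime(issue.fields.created,
                                "%Y-%m-%dT%H:%M:%S.%f%z")
    age = now - created
    if age > SLA:
        print(f"{issue.key}: open for {age} (SLA {SLA}) -- escalate")
```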
The last thing you often have to deal with is a lack of autotests. First, they should be written. Second, you need to monitor that coverage stays at a certain level and doesn’t fall below the threshold. Many people write autotests but forget to watch the coverage. Don’t make this mistake!
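If you use pytest-cov, its built-in `--cov-fail-under` flag already does this. For other setups, a minimal gate over a Cobertura-style `coverage.xml` (as produced by `coverage xml`) looks like this, with the 80% threshold as a placeholder:

```python
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.80  # placeholder: your agreed minimum line coverage

root = ET.parse("coverage.xml").getroot()
line_rate = float(root.get("line-rate", 0))  # Cobertura root attribute

print(f"line coverage: {line_rate:.1%} (threshold {THRESHOLD:.0%})")
if line_rate < THRESHOLD:
    sys.exit("coverage below threshold -- failing the build")
```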
To be continued…
In the next article, I’ll address how to use objective data to tackle issues at the code review and testing stages of software product engineering.