Pears of Wisdom - Codermetrics Notes

Description

Notes from a book about how to think about measuring the most impactful aspects of a software engineer's work.
Note by Alex Poiry

Resource summary

Page 1

Introduction

Software teams are successful because they have the right variety of developers, and some patterns for success and failure are different from what you'd assume. For example, successful teams often have someone who just does the little, detail-oriented, or un-sexy things.

We need a quantitative and analytical approach to understanding the skills and work of individual coders in order to improve software teams. The metrics we commonly deal with don't provide enough insight to answer many key questions, such as:

- How well is our software team succeeding?
- How are individual team members contributing to the team's success?
- What capabilities can be improved to achieve greater success?

Analogy: in sports, not everyone is good at the same things; we need diversity. In effect, we need to play Moneyball for software development. The key steps followed in this new approach are:

- Find a way to measure the differences between winning and losing teams.
- Find a way to measure the contributions of individual players to their teams.
- Determine key player characteristics that are highly correlated with winning or losing (see the sketch below).
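The last step lends itself to a quick computation. Below is a minimal sketch, assuming you already have per-team averages of some coder characteristics and each team's win count for the same period; the metric names ("review_throughput", "bug_fix_rate") and the numbers are hypothetical placeholders, not metrics defined in the book.

```python
# Minimal sketch: correlate hypothetical coder characteristics with team wins.
# Metric names and data are made-up placeholders, not the book's metrics.
from statistics import correlation  # Pearson's r, Python 3.10+

# One row per team: team-average value of each characteristic, plus the
# team's "wins" (e.g., successful releases) over the same period.
teams = [
    {"review_throughput": 12.0, "bug_fix_rate": 0.8, "wins": 7},
    {"review_throughput": 15.5, "bug_fix_rate": 0.6, "wins": 9},
    {"review_throughput": 8.0,  "bug_fix_rate": 0.9, "wins": 4},
    {"review_throughput": 11.0, "bug_fix_rate": 0.7, "wins": 6},
]

wins = [t["wins"] for t in teams]
for characteristic in ("review_throughput", "bug_fix_rate"):
    values = [t[characteristic] for t in teams]
    print(f"{characteristic}: r = {correlation(values, wins):+.2f}")
```

With real data you would want many more teams and some significance testing, but the shape of the calculation is the same: "highly correlated with winning or losing" is something you can actually compute once the measurements exist.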

Page 2

Measuring What Coders Do

Three reasons to gather metrics:

- Help track and understand what is happening, both historically and in good, consistent detail.
- Help communicate about what has happened, see trends, etc.
- Help indicate what needs to improve.

Metrics are not grades or ranks. Metrics may contribute to a grade or rank, but on their own they are just useful sets of data points.

The most meaningful and useful metrics connect and relate individuals and teams to organizational goals. To achieve this, organizational goals must be defined, the team needs to understand how those goals are measured, and the goals must then relate to measurable activities of the individuals and teams.

Metrics can be good or bad. To evaluate a metric, ask questions like:

- Can I easily describe the metric and have people understand it?
- Can the metric show me something I don't already know?
- Does the metric clearly relate to a goal we care about?

If the answer to any of these is no, the metric probably needs to be reworked. If the answer becomes no over time, rework the metric or discard it as appropriate. Good metrics don't just track activity; they relate to achievement and outcomes.

Good metrics are not always obvious. It is a useful practice to challenge your assumptions; you can use existing metrics to validate your assumptions, or create new metrics to validate them.

As you gather metrics, look for patterns, and remember that not all patterns are simple. As you look at patterns over time, try to distinguish between outliers and anomalies. Anomalies are values outside the norm that are generally not repeatable. Outliers are widely separated from the main cluster but may be repeatable and thus valuable. If there is a clear explanation for a one-time occurrence, the result is an anomaly. If there is no clear explanation for what appears to be a one-time occurrence, the result is an outlier and should be examined more closely and watched over time to determine whether it is meaningful or part of a pattern.

Local maximum and local minimum values are worth studying to see whether they can be explained. If common events appear during peaks or valleys, they can be recreated or eliminated as required; this applies to sustained peaks and valleys as well (see the sketch below). Occasionally a single person will lift up or drag down those around them, and there is also a gestalt effect at play, where collaboration among a few specific engineers sometimes results in better or worse than expected performance.

Metrics can potentially tell you a lot, but the goal is to find patterns of repeatable success. To that end, you need an idea of what success looks like for your team. Once you know what success looks like, you can compare teams that are consistently successful with teams that are only occasionally successful, study the positive deviance, and hopefully find useful patterns. At a minimum, you may be able to find patterns that contribute to failure.

Be aware that metrics will never fully explain something as complex as a human system for developing complex software. Think broadly about new and interesting data elements that could make for more meaningful metrics, and about how to identify data that would measure how coders and software teams are doing relative to team and organization goals.
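As a rough illustration of the pattern-watching described above, here is a minimal sketch that flags points in a weekly metric series falling far outside the norm (candidates to classify as anomalies or outliers) and locates local peaks and valleys worth explaining. The sample data and the two-standard-deviation threshold are illustrative assumptions, not rules from the book; deciding whether a flagged point is an anomaly or an outlier still takes the human judgment described above.

```python
# Minimal sketch: flag unusual points and local peaks/valleys in a metric series.
# The sample data and the 2-sigma threshold are illustrative assumptions only.
from statistics import mean, stdev

weekly_metric = [5, 6, 5, 7, 6, 14, 6, 5, 2, 6, 7, 6]  # hypothetical weekly values

mu, sigma = mean(weekly_metric), stdev(weekly_metric)
unusual = [week for week, value in enumerate(weekly_metric)
           if abs(value - mu) > 2 * sigma]
print("Far outside the norm (candidate anomalies/outliers):", unusual)

# Local maxima/minima: points higher or lower than both neighbours.
peaks = [i for i in range(1, len(weekly_metric) - 1)
         if weekly_metric[i - 1] < weekly_metric[i] > weekly_metric[i + 1]]
valleys = [i for i in range(1, len(weekly_metric) - 1)
           if weekly_metric[i - 1] > weekly_metric[i] < weekly_metric[i + 1]]
print("Local maxima at weeks:", peaks)
print("Local minima at weeks:", valleys)
```

Whether a flagged week is a one-off anomaly, a meaningful outlier, or the start of a sustained peak or valley is exactly the kind of question the notes say to investigate rather than automate away.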


Similar

Software Processes
Nurul Aiman Abdu
Historical Development of Computer Languages
Shannon Anderson-Rush
Useful String Methods
Shannon Anderson-Rush
Software testing strategies: Summary
harrymt
Polymer 2.0 - Custom Element - Register Element
Ravi Upadhyay
Software Application
Dim Ah
Code Challenge Flow Chart
Charlotte Hilton
Flvs foundations of programming dba 2
mariaha vassar
Pears of Wisdom - Groovy Fundamentals
Alex Poiry