I didn't manage a blog post during TechEd, but I'll take a stab at a few things that have swirled in my mind since then. The first thing I've been working on is understanding NDepend, a static analysis tool. I went to a "Birds of a Feather" spearheaded by some folks from Corillian and had to keep my mouth shut tight so as not to look a fool. Luckily, many of the folks there were like me: they knew about static analysis as a concept but were trying to figure out how they could make good use of it.
It will be quite some time before static analysis is part of our build process, but I'm most interested in using a tool like NDepend to take the emotion out of code reviews. That is to say, while I do love a vigorous discussion of style and preference, there are concrete measures one can look at objectively to evaluate the well-being of a software design.
While I'm shaky on the exact meaning of all the metrics (it was suggested to run the tool repeatedly and look for trends), I ran it against a project that has pretty much consumed my thirty-first year on this earth. That had been on my "to do" list for a while, but I think there's always a bit of hesitancy on my part when I'm about to be brutally honest with myself; I designed this software and wrote quite a bit of the code.
The results weren't great, but they weren't horrible. When I ran it on the libraries by themselves, the dependency visualizations looked clean and tidy, and most of the metrics weren't too far outside the guidelines in the cheat sheet we got.
However.
Some obvious weaknesses came to light. First and foremost, most of our assemblies sat solidly in the zone of instability. Two suspicions were confirmed. First, we didn't have a very formal design process; I need to get better at seeing my job at the architect level rather than only as a coder. Second, we rushed. The rush wasn't just a schedule thing; it can also be attributed to our short release cycles. The agile folks recommend these, but they should be balanced with a quiet period at the beginning when the overall design decisions are being made. The final item, which NDepend would have helped us catch in a continuous integration cycle, was unused code. After a year's worth of work it's hard to look at so much of it and expect oneself to clean it up, but as a weekly task it would be an easy way to keep things healthy and tight.
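For anyone else puzzling over that instability number: it follows Robert C. Martin's definition, I = Ce / (Ca + Ce), computed per assembly from coupling counts. Here's a minimal Python sketch of the calculation; the assembly names and coupling counts below are made up purely for illustration, not taken from my project.

```python
# Robert C. Martin's instability metric, reported per assembly:
#   I = Ce / (Ca + Ce)
# Ca (afferent coupling): types outside the assembly that depend on it.
# Ce (efferent coupling): types elsewhere that this assembly depends on.
# I ranges from 0 (maximally stable) to 1 (maximally unstable).

def instability(ca: int, ce: int) -> float:
    """Return Martin's instability metric for one assembly."""
    if ca + ce == 0:
        return 0.0  # an isolated assembly has no couplings to measure
    return ce / (ca + ce)

# Hypothetical coupling counts for illustration only:
assemblies = {
    "Core.Domain":  (25, 3),   # widely depended on, few outgoing deps -> stable
    "Web.Frontend": (0, 40),   # nothing depends on it, many outgoing deps -> unstable
}

for name, (ca, ce) in assemblies.items():
    print(f"{name}: I = {instability(ca, ce):.2f}")
```

The intuition is that a high-I assembly is easy to change (little depends on it) but fragile (it depends on lots that can change underneath it), which is why a whole codebase sitting at the unstable end is a warning sign.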
Final note: running NDepend is ridiculously easy. The hardest thing, besides looking at the metrics and trying to understand them, is having the courage to look objectively at what you've done.