Wednesday, March 30, 2005

Example of a Pivot Point

So here's an example:

setMeasurement("Gas Equivalent", 10, x, y ,z);
setMeasurement("Production', 5, a,b,c);


We have lots of these in hundreds of tests. Then we had an AHA moment: Gas Equivalent is no longer on equal footing with Production; it is subservient to it.

What we really want is something like: (Keep the goal in mind!)

setGasEquivalent("Production", 10);
setMeasurement("Production", 5, a, b, c);


If you were to make the change in the business logic and then in all your tests, it would take several days before you could check in. On the other hand, if you have a pivot point, you can keep checking in and do the work in less time.

The pivot point in this refactoring is creating a method:

setMeasurementX(name, value, x, y, z) {
  setMeasurement(name, value, x, y, z);  // just delegate - behaviour unchanged, all tests still pass
}


So we now change all the Gas Equivalent calls in the tests - a single added character each.

setMeasurementX("Gas Equivalent", 10, x, y ,z);
setMeasurement("Production", 5, a,b,c);


You haven't changed the contract. (Sometimes you need to add a parameter or something; do as LITTLE AS POSSIBLE.)

You can check in often as you make the change - remember, all your tests will continue to pass.

Make the change in the business logic (remember to find a pivot point in the business logic too, if it is a big change down there...).

All your tests should fail.

Now make this change in setMeasurementX:

setMeasurementX(name, value, x, y, z) {
  setGasEquivalent("Production", value);  // switch to the new behaviour - gas equivalent now hangs off Production
}


All your tests will pass.

Inline setMeasurementX.

setMeasurementX was your pivot point - reducing a large problem to a small change.

Pivot Points

In the best large scale refactorings I've done, I've found pivot points.

Often you must change an API or how data works in many tests or places. If you change the behaviour and then the tests, it can take a long time to do the redesign by refactoring. In this case you need to find a pivot point.

A pivot point represents a small bit of code that allows you to change one behaviour to another in a few lines of code.

Basic steps are:
1) You refactor all your code to the pivot point. (You preserve the old behaviour - all your tests pass.) You should be able to do this incrementally, checking in often.
2) You make your underlying change - all your tests should now fail, because the pivot point still routes through the old implementation.
3) You change the pivot point to produce the new behaviour.

Tada, all your tests pass.

By introducing one or more pivot points you can change behaviour incrementally.

The blog I'll write next shows how this works.

A big piece of the puzzle...

Good things come when you meet people who went to conferences so you didn't have to. Thanks Jenitta.

http://www.enthiosys.com has "twelve ways" (http://www.enthiosys.com/twelveways.php) to improve your prioritization and story acquisition process. I've seen descriptions of all twelve ways. They represent some good, not-so-technical ways to think about requirements.

Stuff to try on the how-to-educate-your-customer front. Looks really good.

Monday, March 28, 2005

Relational Databases ripe for the picking...

1) Minimize the number of joins you make
2) Denormalize your database to increase performance
3) OO and relational have an impedance mismatch
4) Oracle/MSSQL require special expertise to make them perform on even a moderate project
5) Oracle is impossible to install
6) Oracle is even harder to uninstall
7) SQL feels like programming in assembly language these days
8) Uncountable runtime parameters to tweak
9) I believe there is an Oracle cult evolving...

Much of this is due to the fact that databases cater to the "enterprise" market. Performance for large data sets demands the most from the machines. You must be able to microscopically control the behaviour of the database when you have enormously large databases.

How come I need to run the Oracle Analyzer on my SQL, which then fixes it and tells me which indexes to build? That really seems like the role of a compiler. This technology has existed forever.

Why is a bit bucket so hard to configure, install, and understand on a smaller project?

I think someone will come along with a better metaphor than the relational database (not OO) that allows rapid integration into a data warehouse, meshes nicely with objects, and is easy to report on.

I wonder if Oracle will buy them and bury them or just outright crush them.

Thursday, March 24, 2005

Good blog

Really nice summary that automated testing is more than a green bar...

http://www.kohl.ca/blog/archives/000084.html

Testing is about the results. Sometimes a green bar tells you enough. Sometimes it provides you with information about change, and it is up to the tester to interpret the results as "change" or "bug".

We have both on our project: JUnit green bars for developers, and automated tests that mark up spreadsheet values for testers (red for different, white for the same).

The pattern of red and white often reveals where the change or problem has occurred and allows the testers to quickly interact with developers and business people to determine whether it is a bug or a change in the system.
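
A toy version of the marking, just to give the flavour of the pattern (the measure names and numbers are invented; the real framework colours spreadsheet cells rather than printing):

// Toy illustration only: compare expected and actual values measure by
// measure and mark each cell RED (different) or WHITE (same). A run of
// RED downstream of one value usually points straight at the step that changed.
public class RedWhitePattern {
  public static void main(String[] args) {
    String[] measures = { "RawGas", "SalesGas", "GasEquivalent", "Royalty" };
    double[] expected = { 120.0, 102.0, 612.0, 15.3 };
    double[] actual   = { 120.0, 108.0, 648.0, 16.2 };  // everything after SalesGas shifted

    for (int i = 0; i < measures.length; i++) {
      boolean same = Math.abs(expected[i] - actual[i]) < 0.0001;
      System.out.println(measures[i] + ": " + (same ? "WHITE" : "RED"));
    }
    // Prints WHITE, RED, RED, RED - the pattern says "look at the SalesGas step".
  }
}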

We should probably get Jonathan out to visit our project. We can share experiences.

Recipes vs Principles

Joseph, Colin and I spend a lot of time talking about what works and what doesn't work on the project. We're constantly looking for ways to improve.

What gets us going, trying something new? Someone's experience. Not a philosophy or a deep underlying principle - a well-described experience.

Why do you think Refactoring, Patterns of Enterprise Application Architecture, and Design Patterns are great books revered by developers of all levels? They describe how things are done, from experience.

Other books, such as Agile Software Development, describe the philosophy, principles, or interpretation of why things occur. They help you think about things at an abstract level - so you can adapt. In many ways they require me to be way too smart: I need to take the principle, figure out a concrete way to do it, and then figure out how it fits in my particular project.

It is far easier to take something that someone has already figured out and adapt it to my situation. XP was an adaptation of the way people really worked. The adaptation was taking things to an extreme.

I am way better at adapting other people's experiences than deriving new processes from principles.

Tuesday, March 22, 2005

Agile expects Demos

If you're on a project that runs more than a few months, make sure you do at least monthly demos. This will save your bacon. It buys you credibility.

We always produce production quality code - it must at least be demoable.

Who should you present to? Not your onsite customer - they are probably not the decision maker. Make sure the decision makers see the demos. They need to see the value of what you are developing.

Who should do the presentation? Your onsite customer.

When should you present? At least every month.

This has several ramifications:

1) It ensures decision makers are involved. Gantt charts make people feel comfortable, but they fail to convince. Demos provide concrete evidence of the small incremental steps of agile. It gives you credibility when you want to talk about problems.

2) It motivates production quality code. There is nothing better to motivate a team than to have a demo where things go badly. Usually the next demo goes quite well.

3) The onsite customer doing the demo avoids the smoke and mirrors temptations and expectations. An onsite customer won't stand for it. The decision makers will have more trust that it isn't smoke and mirrors if their own people are presenting.

Project progress...

We have seen several things as of late:

  • Our developers are learning new things like large refactoring and object responsibility
  • Our customers are simplifying their prioritization by paying attention to real data rather than the contract.
  • Our developers seem quite pleased to be working on the project

A little more Agile every day.

A blast from the past

Dave Thomas (big Dave from OTI) showed up on the site the other day and made contact with me.

It is nice that he remembers me. He taught me at Carleton, and then showed me at OTI, what computer science was all about. I gave up a career in usability to become more technically focused as a result. It has been a blast ever since.

I continue to strive to make development environments as focused and productive as he did at OTI. It was an awesome and inspiring experience. I hope to share at least a part of that experience with the people I lead today. Thanks Dave.

Monday, March 21, 2005

Agile lets you solve real business problems

On most projects your data is ported at the end, and you hope it all works, that everything you implemented comes together.

Show your customers how to use their current data to prioritize stories. To do this, you must understand what the system is going to be used for. You'll need a business architect who can explain what the system is trying to accomplish.

1) At the beginning of the project, ask your customers for some real data - the simplest thing they can think of that they use frequently. This is your first target. You need to help them identify it. Use your business architect.

2) Ask them what they do with that data. Include administration and use. These form your gross stories. Break those stories down even further - into small doable steps.

3) Make a plan that may span iterations for getting these real processes running. Use rough guesses for complexity.

4) Rejuggle the plan to make it better. It's only a plan.

5) Ask them for the algorithms at this point, if they are complex (involve your business architect and whatever resources are necessary if the algorithms are hard).

6) Ask your users for examples. Show how you will convert them into tests.
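
For step 6, converting one of their examples into a test might look roughly like this (GasPlant, the numbers, and the shrinkage factor are all invented for the sketch - not our real calculation code):

import junit.framework.TestCase;

// Sketch of turning a customer's worked example straight into a JUnit test.
public class ProductionExampleTest extends TestCase {

  // Minimal stand-in so the sketch is self-contained; the real object lives in the domain layer.
  static class GasPlant {
    private double rawGas, shrinkage;
    void setRawGas(double mcf) { rawGas = mcf; }
    void setShrinkageFactor(double factor) { shrinkage = factor; }
    double salesGas() { return rawGas * shrinkage; }
  }

  public void testSalesGasMatchesCustomerExample() {
    // From the user's example: 120 mcf raw gas at a 0.85 shrinkage factor gives 102 mcf sales gas.
    GasPlant plant = new GasPlant();
    plant.setRawGas(120.0);
    plant.setShrinkageFactor(0.85);
    assertEquals(102.0, plant.salesGas(), 0.001);
  }
}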

You have shown them how to take real problems and break them down into small doable steps and then extract tests to verify them. This is your first planning session.

Agile acknowledges mistakes

The agile manifesto states:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

I don't think this actually identifies the power of agile. I believe that, if pressed, people would argue this is true of all processes. I really don't think people are that dense.

One of the things agile does that other processes don't do is acknowledge mistakes. It has a process (iterations) for discovering and fixing mistakes.

In the first months of a project you need to teach your customer that mistakes are OK. Count requirements mistakes, development mistakes, testing mistakes, all the mistakes you can think of. Show how they are dealt with and how it is not a big deal.

Why doesn't XP help the customer?

I said before that XP doesn't help your customer. Why not? Because it doesn't help your customer become agile.

Interestingly, XP is hard for developers to practice. It does not make them agile. Agile for developers is something that takes mentoring. It takes time: several months for a team to lose its old habits and begin to understand how to knit together the new ones. To get proficient takes years.

How come, then, we expect a customer to be able to prioritize their requirements, break them into smaller stories, and predict which ones they might not need because they'll fall off the end of the list? And, best of all, we tell them "trust us, you won't need 40% of what you're asking for - this study over here shows it."

Actually, I have the same problem with developers. However, with developers I have a set of steps I can make them do, that I can show them - not an abstract concept. Look, I can say, don't you feel more confident about changing your code after you write your tests? Doesn't your code look better when you learn to refactor a design into existence instead of planning it? It all makes sense once they do it. They may not agree with it, but it makes sense - once they've done the steps for a while.

So what about the customer - what steps do I give them so that they can play the game, understand the interactions, and become proficient? I guess I'll be exploring that in my next few blogs.

Wednesday, March 09, 2005

Project improvements

A couple of interesting project improvements show our customers' dedication to making this a successful project.

1) The progress of the project will be measured in terms of real properties (real data!) that we work with.

2) The customers will focus their requirements efforts around real properties (actual need rather than perceived need)

3) The customers will move into the pit with developers (not spend most of their time in a separate meeting area).

Excellent.

Saturday, March 05, 2005

If you plan everything and defer deployment...

.. all will be OK.

Oh my god. We had a Webfocus VP in. He was talking to our customers, supposedly to explain report design and help them with it. He shot out a few truisms, suggested one of our senior developers was misguided, and told our customers that if you defer deployment and plan everything you'll be in great shape.

He also informed us that the best strategy for handling problems where Webfocus didn't have the features we needed was to wait for Webfocus to implement them. Yeah, that'll work.

He was surprised that it takes 99 steps to install Webfocus. He was surprised our IT team had trouble installing it.

He was totally unprepared for our project. Afterwards I had harsh words for him.

Webfocus is living in a dreamworld; with the attitudes I've seen, they'll probably self-destruct in the near future. They aren't focussed on solving the problem, just explaining why your competent development team should get smarter and understand their product.

Consistency on a large team is hard...

One of the problems that faces large teams is Abundant Mutation. We have this in spades on our team. A couple of developers will do something better (or differently) than it was done before and not apply it backward to all the other cases.

I would rather have some code that is a little awkward to use than multiple different versions of the same code. If it is too awkward, I would prefer to make the change everywhere rather than in a single place.

A refactoring class

We have a problem area in our application - the workflow facade. It began as a transaction script (our workflow is beyond simple in terms of branching etc) and has evolved into an unholy procedural mess.

We've played with it once or twice, trying to find the right refactoring steps to turn it into useful objects. A couple of days ago, trying again (for a timeboxed couple of hours), we stumbled across a simple refactoring that turns this ugly spaghetti code into nice OO code. Best of all, the refactoring is fairly simple - 8-10 steps - and will be repeated 20-30 times.

This will make an ideal platform for teaching people the more interesting, chained together, refactorings.
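
To give a flavour of the shape of it, here is a made-up miniature (Deal, WorkflowState, and the status names are invented, not our actual facade): the procedural version switches on a status string inside one big method; each pass of the refactoring pulls one branch out into a small state object.

import java.util.Date;

public class WorkflowSketch {

  static class Deal {
    String status = "DRAFT";
    Date approvedOn;
  }

  // Before: the transaction-script style - one branch per status, all in the facade.
  static void approveProcedural(Deal deal) {
    if ("DRAFT".equals(deal.status)) {
      deal.status = "APPROVED";
      deal.approvedOn = new Date();
    } else {
      throw new IllegalStateException("Cannot approve from " + deal.status);
    }
  }

  // After one pass of the refactoring: the branch becomes a state object the facade delegates to.
  interface WorkflowState {
    void approve(Deal deal);
  }

  static class DraftState implements WorkflowState {
    public void approve(Deal deal) {
      deal.status = "APPROVED";
      deal.approvedOn = new Date();
    }
  }

  public static void main(String[] args) {
    Deal deal = new Deal();
    new DraftState().approve(deal);
    System.out.println(deal.status);  // APPROVED
  }
}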

A good team

Just need to remind myself that I'm working on a good team.

I often get upset that the code isn't perfect, the tests don't give 100% coverage, the requirements are sometimes unclear, etc.

However, this isn't important. What IS important is that the team produces new things every day, that the team cleans up the code a little more every day, that the developers learn something new every day, that the tests get better every day. This is impressive for a team with 22 developers/designers/architects on it.

Tuesday, March 01, 2005

Small Successes in pseudo FIT

One of the problems we have in our application is writing integration tests. Our customer's business basically comes down to an application that takes 30 (relatively inaccurate) numbers and, through various convolutions, turns them into thousands of very precise numbers.

We currently break our development tests into process chunks. We "chop" the business. For each stage in the process we create a known set of inputs and test against a known set of outputs (keeping the numbers to a few dozen). This helps with defect triangulation and keeps the tests maintainable. We couldn't do test-first development without it.
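
One of these chopped stage tests looks roughly like this (the stage, the 6:1 conversion factor, and the numbers are invented for illustration):

import junit.framework.TestCase;

// Sketch of a single "chopped" stage test: a small, known set of inputs
// pushed through one stage and checked against known outputs.
public class GasEquivalentStageTest extends TestCase {

  // Hypothetical stage: converts oil volumes (bbl) into gas-equivalent volumes (mcf).
  static class GasEquivalentStage {
    double[] toGasEquivalent(double[] oilBbl, double mcfPerBbl) {
      double[] out = new double[oilBbl.length];
      for (int i = 0; i < oilBbl.length; i++) {
        out[i] = oilBbl[i] * mcfPerBbl;
      }
      return out;
    }
  }

  public void testKnownInputsProduceKnownOutputs() {
    double[] oil = { 10.0, 25.0, 4.0 };           // known inputs
    double[] expected = { 60.0, 150.0, 24.0 };    // known outputs at an assumed 6 mcf per bbl
    double[] actual = new GasEquivalentStage().toGasEquivalent(oil, 6.0);
    for (int i = 0; i < expected.length; i++) {
      assertEquals(expected[i], actual[i], 0.001);
    }
  }
}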

However, we may miss linking steps together (despite our mock workflow tests). We may not even understand all the processes involved. And despite our best efforts, the end-to-end process may not work. Worse, when we refactor, we may break something that used to work.

Our pseudo FIT framework serves as an automated way of checking. It allows the QA people to take a snapshot of the VIEWS of the database (basically our facade for reporting) and create a simple script to execute the events. This lets us see change in the system and catch bugs earlier and more automatically. It has caught a number of them already.
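
In spirit, the snapshot side is not much more than this (the view name, row key, and connection handling are placeholders, and the real framework writes marked-up spreadsheets rather than printing):

import java.sql.*;
import java.util.*;

// Rough sketch of the snapshot idea: read every row of a reporting view into
// memory, run the scripted events, read it again, and report which rows changed.
public class ViewSnapshot {

  static Map<String, String> snapshot(Connection conn, String view) throws SQLException {
    Map<String, String> rows = new LinkedHashMap<String, String>();
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT * FROM " + view);
    ResultSetMetaData meta = rs.getMetaData();
    while (rs.next()) {
      String key = rs.getString(1);                 // assume column 1 identifies the row
      StringBuffer value = new StringBuffer();
      for (int col = 2; col <= meta.getColumnCount(); col++) {
        value.append(rs.getString(col)).append('|');
      }
      rows.put(key, value.toString());
    }
    rs.close();
    stmt.close();
    return rows;
  }

  static void reportChanges(Map<String, String> before, Map<String, String> after) {
    for (Map.Entry<String, String> entry : after.entrySet()) {
      boolean same = entry.getValue().equals(before.get(entry.getKey()));
      System.out.println(entry.getKey() + ": " + (same ? "WHITE" : "RED"));
    }
  }
}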

I feel a little safer.