Ok, I’ll come clean… this is my pet topic in software development, and one day my tag cloud will agree with me ;^)
To me the “Holy Grail” of software development is to have languages, tools and techniques that allow us to easily discover (and transcribe) the intent of software i.e. what the software is actually trying to *achieve*, not how it does it.
Why is this so important? Well, without understanding the intent of a piece of code, a developer has very little chance of successfully adding new features or fixing bugs. Changing code without understanding the intent (usually by just copying and pasting existing code) is just MSMD (“Monkey See, Monkey Do”) programming and even when combined with a process such as TDD, such an approach is going to be painfully slow and error-prone. You might well get your next test to pass, but the chances are it will be the wrong test!
Now, I know that these ideas are not new – the JetBrains paper on Language Oriented Programming was written back in 2004, and Charles Simonyi’s company Intentional Software was started even earlier, in 2002 – but it seems to me that the mainstream world of software development is also heading this way with the rise of more declarative languages, APIs and DSLs (which I would say are excruciatingly trendy right now ;^)
Developers shouldn’t think of writing tests as being *like* writing code – they should think of it as *exactly* the same as writing code. IMHO, the “code” has 2 parts, the implementation and the tests, and each needs as much care and attention as the other… Anyhoo…
A test remarkably similar to the following cropped up recently:-
self.assertRaises(SomeError, self.foo(x, y, z).blargle)
Now, to paraphrase the legendary Mr Clough – “It’s not the worst test I’ve ever seen, but it’s in the top 1” ;^)
How can such a small test be so bad? Well, it manages to pack a couple of critical errors into a single line (and that’s some going I have to say ;^):-
1) It is not clear whether it is the method call or the attribute access (or both) that is expected to raise the exception.
2) The implied API is clunky at best. If the reason for calling the method is to get hold of the “blargle” then just return the “blargle”! The only thing that this test makes clear is that the API is not clear!
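To make the first complaint concrete, here is a minimal sketch of how the intent could be written unambiguously – `SomeError`, `foo` and `blargle` are hypothetical stand-ins borrowed from the test above, and the `Widget` class is invented purely so the example runs:

```python
import unittest

# Hypothetical stand-ins for the real code under test.
class SomeError(Exception):
    pass

class Widget:
    @property
    def blargle(self):
        # The attribute access itself is the thing expected to fail.
        raise SomeError("blargle is not available")

class BlargleTest(unittest.TestCase):
    def foo(self, x, y, z):
        # Construction is expected to succeed.
        return Widget()

    def test_blargle_access_raises(self):
        obj = self.foo(1, 2, 3)      # this call must NOT raise...
        with self.assertRaises(SomeError):
            obj.blargle              # ...only the attribute access may
```

Split this way, a reader (and the test runner) can tell exactly which step is expected to raise – in the original one-liner, the method call, the attribute access, or both could be the culprit, and a failure in the wrong place would be indistinguishable from a pass.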
If I was a betting man, I would bet that this test was written after the implementation, which might excuse the API weirdness (it could be the start of refactoring a legacy API), but not the lack of clarity…
The API vs DSL debate is an interesting one. IMHO, all good software is made up of layers, with each layer representing a domain. The outermost (application) domain might model a shipping company and hence any API/DSL at this layer should easily describe how to manipulate ships, cargos, routes etc. The innermost layer(s) might model web services or relational databases etc and so any API/DSL at these layers must model the low-level concepts that a developer would expect of these technologies. The key point is that at each layer the API/DSL should allow the user (be it an application domain expert or a hacker-geek dude!) to express the intent of what they want to do as easily as possible.
Now, my first guess is that if I was going to provide a DSL, I would probably only provide it at the application layer, and I would just create (hopefully) nice, clean APIs for the other layers (and of course, the DSL would really just be a thin wrapper around a nice clean application level API!)…
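As a rough sketch of that idea – all the names here (`Shipment`, `ship`, `from_`, `to`) are invented for illustration, using the shipping domain from above – the application-level API does the real work, and the “DSL” is nothing more than a thin, readable wrapper over it:

```python
class Shipment:
    """Plain application-level API: explicit objects and method calls."""
    def __init__(self, cargo):
        self.cargo = cargo
        self.origin = None
        self.destination = None

    def set_route(self, origin, destination):
        self.origin = origin
        self.destination = destination
        return self

# Thin DSL layer: fluent wording for exactly the same operations.
def ship(cargo):
    return _ShipmentBuilder(Shipment(cargo))

class _ShipmentBuilder:
    def __init__(self, shipment):
        self._shipment = shipment

    def from_(self, origin):
        self._origin = origin
        return self

    def to(self, destination):
        # Delegate straight to the underlying API.
        self._shipment.set_route(self._origin, destination)
        return self._shipment

# Usage reads close to the domain language:
s = ship("bananas").from_("Southampton").to("Rotterdam")
```

The point of keeping the wrapper this thin is that anyone who outgrows the DSL can drop down to the `Shipment` API without losing anything – the DSL adds readability at the application layer, not new capability.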
Of course, you could go DSL crazy and create one for each layer, although this seems like overkill to me, and brings up some pragmatic issues (which actually apply even to a single-DSL architecture):-
1) The ability of the user to learn a new language (even one targeted to the domain that they are in). This would obviously require the usual array of tutorials and reference manuals etc.
2) Currently when I code (I use TDD) I write tests in the same language as the shippable code. Obviously, not every DSL can have extensions for testing, so would we use a low-level language for the tests, or have a DSL for testing too?