The API vs DSL debate is an interesting one. IMHO, all good software is made up of layers, with each layer representing a domain. The outermost (application) domain might model a shipping company, and hence any API/DSL at this layer should make it easy to describe how to manipulate ships, cargoes, routes etc. The innermost layer(s) might model web services or relational databases etc., and so any API/DSL at these layers must model the low-level concepts that a developer would expect of those technologies. The key point is that at each layer the API/DSL should allow the user (be it an application domain expert or a hacker-geek dude!) to express the intent of what they want to do as easily as possible.
Now, my first guess is that if I were going to provide a DSL, I would probably only provide it at the application layer, and just create (hopefully) nice, clean APIs for the other layers (and of course, the DSL would really just be a thin wrapper around a nice, clean application-level API!)…
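To make the "thin wrapper" idea concrete, here's a minimal Ruby sketch — all the names (Ship, sail, the route strings) are invented for illustration, not a real shipping API. The plain class is the clean application-level API; the "DSL" is just a few words of sugar on top of it:

```ruby
# The plain application-level API: an ordinary class, usable on its own.
class Ship
  attr_reader :name, :route

  def initialize(name)
    @name = name
  end

  def sail(route)
    @route = route
    self
  end
end

# The "DSL" layer: a thin wrapper that evaluates a block in the
# context of a new Ship, so domain words read almost like English.
def ship(name, &spec)
  Ship.new(name).instance_eval(&spec)
end

vessel = ship("Maersk Alma") { sail "Rotterdam-Shanghai" }
vessel.route  # => "Rotterdam-Shanghai"
```

Nothing in the wrapper knows anything the API doesn't already expose — which is rather the point.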
Of course, you could go DSL crazy and create one for each layer, although this seems like overkill to me, and brings up some pragmatic issues (which actually apply even to a single-DSL architecture):-
1) The ability of the user to learn a new language (even one targeted at the domain they are in). This would obviously require the usual array of tutorials, reference manuals etc.
2) Currently when I code (I use TDD) I write tests in the same language as the shippable code. Obviously, not every DSL can have extensions for testing, so would we use a low-level language or have a DSL for testing?
And so it was that the student asked the guru, and the guru said:-
“Of course not. You only need to write tests if you need to refactor your code.”
“Ah, I see”, replied the student, “and will I always need to refactor my code?”
“Of course not. You only need to refactor your code if you need to add features or fix bugs.”
If a function/method takes a block, why would I ever use the implicit ‘yield’ syntax instead of explicitly passing the block as the final ampersand argument?
Why would I do:-

def foo(x)
  yield x
end

instead of:-

def foo(x, &block)
  block.call(x)
end
Doesn’t the implicit syntax hide from the caller the fact that the function/method accepts a block, meaning that the only way to discover it is to read the code?!? What am I missing?
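For what it's worth, a quick irb experiment bears the discoverability point out — both forms are called identically, but (in Ruby 1.9+) only the explicit form advertises the block in its signature via Method#parameters:

```ruby
# Implicit form: the block never appears in the signature.
def implicit(x)
  yield x
end

# Explicit form: the block is a named parameter.
def explicit(x, &block)
  block.call(x)
end

# Callers can't tell the difference...
implicit(2) { |n| n * 10 }  # => 20
explicit(2) { |n| n * 10 }  # => 20

# ...but reflection can: only the explicit form reveals the block.
method(:implicit).parameters  # => [[:req, :x]]
method(:explicit).parameters  # => [[:req, :x], [:block, :block]]
```

(The implicit form is generally a touch faster, since Ruby doesn't have to reify the block into a Proc object — which may be part of the answer to the "why" question.)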
I’ve come across Django on a couple of sizeable projects now, and I’ve noticed that the teams involved put all of their views in ‘views.py’, which can end up being 1000+ lines long (and then some ;^). Now, I know that is the Django way, but my gut instinct is that:-
a) it cuts down on potential re-use
b) on multi-developer teams it increases the chances of merge conflicts
Not to mention that any Python module over a couple of hundred lines long sets off my nervous tic ;^)
As I’ve mentioned previously, I like the idea of attaching metadata to my information so that I can view it in many weird and wonderful ways, and not just how a random developer decreed it should be viewed (especially scary, is the infamous “I’ve got a tree control and I’m gonna use it” kind of developer!).
Now, in Web 2.0-Ville that (for now) means tags, and so I did the dutiful thing and signed up to delicious, and I was happier than a happy thing from Happy-on-Sea, going about adding all of my bookmarks to it, when I hit the ‘restricted bookmark’ issue whilst trying to delicious-ize (what’s the right verb?) a link to a directory on a local disk. Now, I understand that delicious is at its heart a tool for sharing, and I don’t want to expose private information, but it is still a little disappointing. It means I can’t access my information in a unified way simply because of where it is stored, even though in this case all I wanted to bookmark was a read-only document containing train timetables, and I wanted it tagged with the rest of my #admin stuff!
Maybe, in the war to organise my information, there really are only two types – private and shared (I won’t say “public” since shared implies a finer grain of control)…
We are still a ways off yet, of course, since I would really like to attach semantics to my tags and have those semantics shared with everybody else, but we’ll get there I’m sure…
Spot the linking theme between these two technologies? You’re right – they both start with “Google”… the question is, will Google notice and integrate them? ;^)
You may have noticed that this blog doesn’t use “Categories” but instead those pesky little Web 2.0 critters called “tags”. I might get tired of them soon, but I have long wanted to be able to attach metadata to, well, pretty much all of my personal information (e-mails, documents, songs, blog posts), and so for the time being, I’m using them all over the place and seeing how it works out… I’ve been using delicious and twitter for a little while now and I’m pretty happy with them, although I can’t help but think that some information is easier for me to retrieve with a well-known structure as opposed to via a tag-cloud…
Now, what I really want is to be able to share my tags across all of my information stores and to be able to search them all through a single unified interface…
Hmmm, because it is syntactically legal to call methods/functions with no parentheses (i.e. foo is the same as foo()), I can’t define a function and then inspect it as I would expect, e.g.

def double(x)
  return x * 2
end
If I do this in “irb” and then enter “double” it tries to invoke the function, but I just want to see the bare-nekkid function as an object ;^(
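For anyone hitting the same wall, one way round it (a sketch, not the only way) is Object#method, which returns the bound Method object without invoking it:

```ruby
def double(x)
  x * 2
end

# method(:double) gives us the function as an object, uninvoked.
m = method(:double)
m.class   # => Method
m.arity   # => 1
m.call(21)  # => 42
```

So the object is there — you just can’t get at it with a bare name.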
Why can’t I define a class inside a method in Ruby? Now, I must admit, it’s not something that you want to do every day, but I’m sure there is a meta-programming case to be made for it, and even if not, it comes in very handy when writing test cases in Python…
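To be fair to Ruby, it’s only the `class` keyword that’s banned in a method body (it raises a "class definition in method body" error); Class.new builds an anonymous class at runtime, which covers most of the metaprogramming (and test-fixture) uses I can think of — a sketch:

```ruby
# Class.new takes a block that is evaluated as the class body,
# so we can build a throwaway class inside a method.
def make_counter_class
  Class.new do
    def initialize
      @n = 0
    end

    def bump
      @n += 1
    end
  end
end

counter = make_counter_class.new
counter.bump  # => 1
counter.bump  # => 2
```

It’s clunkier than Python’s nested `class` statement, but the capability is there.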
And the first one is Frank Spencer… I’ll be here all week ;^)
I have to say that the geek side of me likes much of what I see in Ruby so far… objects everywhere, blocks, the ability to “re-open” class definitions etc. The team-developer side of me, however, is a bit more apprehensive.
My pet “thing” in software is finding tools and techniques to make the intent of programs clear. At the simplest syntactic level this comes down to what is usually referred to as “readability”. Martin Fowler famously said “Any fool can write code that a computer can understand. Good programmers write code that humans can understand”, and it seems to me that Ruby’s terseness is potentially more help to the computer ;^)
I think it comes down to a question of whether you think it is the transcription (i.e. typing) or the comprehension (i.e. reading and understanding) of code that is the tricky part. To add new features and to fix bugs a developer has to first determine the intent of the code and my hunch is that that takes somewhat longer than the typing required to make it happen.
So, that said, does anybody really think that we actually save time by typing ‘to_a’ or ‘to_s’ instead of ‘to_array’ and ‘to_string’?