The “real-time web” is a popular topic right now. My WebHooks initiative is both riding this wave and helping make it a reality. One part of this trend is notifications: real-time notifications about events you care about.

For a long time we’ve had helper apps like the Google Notifier and more recently the Facebook Desktop Notifications app that bring events from the web to your desktop. Twitter has created a whole ecosystem of clients that not only let you actively check Twitter, but passively get updates from Twitter.

Simultaneously, a bunch of systems like Growl have emerged that give local applications a consistent, well-designed, and customizable way to notify you. While your IM client is in the background, it can tell you that somebody IM’d you, and what they said, in an unobtrusive way. It integrates with email applications to tell you of new emails. It gives any application developer a nice way to present notifications to the user, in a way that’s under the user’s control.

Some of the apps that bring web applications to your desktop, like Tweetie and Google Notifier, integrate with Growl (which has a counterpart on pretty much every platform, including the iPhone). The problem is that you only get these notifications when the desktop apps are running, despite the fact that web apps are always running. And yes, you have to have an app running for each web application you use.

And that’s only if they built a desktop app and you were convinced to download it. Most web applications will never be able to notify you by any means other than email. But as I’ve argued before, notifications don’t belong in your inbox!

Another minor point is that all these apps use polling to get updates. In some cases this doesn’t matter, but as data starts moving in real-time, this batches your notifications into bursts that you may not be able to parse all at once. I use Tweetie to get Growl notifications from Twitter at the moment, and if a lot of people are updating, I get a huge screen of updates that I don’t have time to read before they disappear. It becomes useless.

A while back I attempted to make an app called Yapper that lets anybody send real-time notifications to your desktop via XMPP. It was an experiment, and ultimately not the answer. It was only part of the solution.

But today I’m announcing the full solution: a free, public, open-source web service called Notify.io (Notify-I-O).

Notify.io integrates with Growl and other local notifiers (as well as email, Jabber, Twitter, and webhooks) and provides a dead-simple API for any web developer to send real-time notifications to their users.

You can think of Notify.io as a web-level Growl system. It empowers users with a consistent, controllable way to get notifications, and it provides developers with a simple, consistent way for sending those notifications.
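To give a rough idea of what that API might look like from the developer’s side, here is a minimal Python sketch. The endpoint URL, parameter names, and auth scheme are placeholders I’m assuming for illustration, not the actual Notify.io API:

    # Minimal sketch of pushing a notification to a Notify.io-style HTTP API.
    # The URL, parameters, and auth here are hypothetical placeholders.
    import urllib.parse
    import urllib.request

    def send_notification(api_key, user_id, title, text):
        """POST one notification for one user to the hypothetical endpoint."""
        payload = urllib.parse.urlencode({
            "api_key": api_key,  # key issued to the sending application
            "title": title,
            "text": text,
        }).encode()
        url = "https://api.notify.io/v1/notify/" + user_id  # hypothetical URL
        request = urllib.request.Request(url, data=payload)
        with urllib.request.urlopen(request) as response:
            return response.status

    # send_notification("MY_KEY", "user-hash", "Deploy finished", "All tests passed.")

The point is that the sender only needs to make one HTTP request; how the notification actually reaches the user (Growl, Jabber, email, another webhook) is the user’s choice, not the developer’s.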

Notify.io is an open platform for notifications. It’s still in a pre-alpha state, but it already has several useful notification sources. Last Thursday I built Feed Notifier, which uses PubSubHubbub to give you real-time desktop notifications of Atom and RSS feed updates.
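For the curious, the subscriber side of PubSubHubbub boils down to a small HTTP callback: the hub verifies your callback URL with a GET, then POSTs feed updates to it as they happen. This isn’t Feed Notifier’s actual code, just a bare-bones Python sketch of that flow:

    # Bare-bones PubSubHubbub subscriber callback (not Feed Notifier's code).
    # The hub verifies the subscription with GET, then pushes updates via POST.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class HubCallback(BaseHTTPRequestHandler):
        def do_GET(self):
            # Subscription verification: echo hub.challenge back to the hub.
            params = parse_qs(urlparse(self.path).query)
            challenge = params.get("hub.challenge", [""])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(challenge.encode())

        def do_POST(self):
            # The hub pushes new or updated Atom entries in the request body.
            length = int(self.headers.get("Content-Length", 0))
            atom_payload = self.rfile.read(length)
            self.send_response(204)
            self.end_headers()
            # This is where Feed Notifier would parse the Atom and fire a
            # desktop notification.
            print("received %d bytes of feed update" % len(atom_payload))

    if __name__ == "__main__":
        HTTPServer(("", 8080), HubCallback).serve_forever()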

At SHDH 35 last Saturday, Abi Raja built a Facebook notification adapter for Notify.io that’s yet to be released. And there are a couple more in the pipeline (by me and others) to show the power of Notify.io.

Again, it’s pre-alpha, so before I talk much more about it, I should probably finish more of it. I just wanted to make sure I blogged about it in somewhat of a timely fashion. I seem to have a backlog of blog posts about apps I’ve built recently. However, Notify.io is a pretty significant one. Feel free to check it out, just remember that despite its looks, it’s nowhere near finished — but it does work.

“Efficiency is doing things right; effectiveness is doing the right things.” –Peter Drucker

When people, usually analytical people, want to improve a situation, they tend to optimize efficiency: achieve maximum output for input. “Let’s reduce waste! Let’s simplify! Let’s make things smoother! Let’s try and get more out of the system!” I suppose the obsession with efficiency is explained in the Drucker quote: that efficiency is “doing things right.” Who wouldn’t want to do things right?

The problem with efficiency is that it has nothing to do with whether or not what you are currently doing is the right thing to do. Whereas effectiveness is about achieving the right result, or being on the right path.

Too many people assume a system is on the right path. If there is a problem, they address it by smoothing things out and making the process more efficient, without questioning the larger system that produced it. But if the system is going in the wrong direction, that’s only going to make the real problem worse. The push for more standardized testing in public education comes to mind.

What’s really important is effectiveness. In the end, it doesn’t matter if your business is spending the least amount possible, or your computer program is running as fast as possible, or your lifestyle is entirely streamlined. If it’s effective, it achieves the desired goal: your business is producing value, your program is functionally useful, your lifestyle is making you happy. Effectiveness is qualitative. Efficiency is quantitative, which is why I think it’s so big with analytical people, and in fact with intelligent people in general.

If you think about it, intelligence, especially knowledge, is mostly concerned with efficiency. It’s more about how to solve problems, less about what problems to solve. Knowledge is a tool. It’s neutral. To what ends do you actually use it? That requires values and intention, the realm of wisdom. A wise person tends to be an effective person.

When approaching a problem, wisdom and pragmatism must frame intelligence. Before you start thinking about efficiency, you should step back and think about effectiveness. In computer engineering this idea spread with Donald Knuth’s quote, “premature optimization is the root of all evil.” His argument is that about 97% of efficiency optimizations are unnecessary for achieving functionality; once something works, you can determine which optimizations will actually be the most effective improvements.

In a way, it’s a look-before-you-leap argument. Don’t get me wrong with all this. Efficiency is terribly valuable and can improve a situation, but only if you’re on the right path. Just because a system is currently working, or was previously working, doesn’t mean it should be, or that it will be in the future. You should always consider effectiveness before efficiency, even in “working” systems. Here’s why:

Effectiveness opens the door for efficiency, but efficiency can change the requirements for effectiveness.

Quantitative improvements can qualitatively change the situation if taken far enough. The game can change. For example, you can become so efficient at producing cars that production isn’t the problem anymore. Then it’s a question of variety, like choice of color. “You can have any color as long as that color is black,” Ford said, and soon after lost the lead in car manufacturing. Perhaps when business slowed, they tried to make their sales and marketing or administrative organization more efficient. They didn’t re-assess whether the thing they were doing so right (building cars so efficiently)… was the right thing. The effective thing.

Efficiency is important, but powerless without effectiveness. Always keep an eye on effectiveness.

Public Open Source Services

October 29, 2009

Last night I went off and put up a wiki about an idea I’ve been thinking about for a while: public open source services or POSS. Think: public services or utilities on the web run as open source.

Unlike open source software, web services aren’t just source code. They’re source code that runs. They have to be maintained in order to keep running, and the resources they consume have to be paid for. This is why most web services are built using a business as the vehicle. This effectively constrains what you can build by framing it as something that needs to turn a profit, or at least support you to work on it. But does it need to be that way? Can web services be built in a way that makes them self-sufficient, not needing some ambivalent leader to take responsibility for them?

I originally blogged about it in February 2007, six months after I first wrote about webhooks. Unfortunately my old blog isn’t online right now. Back then, I was trying to solve the administrative problem: how do you maintain the servers in an open source way? My idea then was to build a self-managing system using something like cfengine or Puppet, where the recipes and configurations are kept with the publicly available source code. As new configurations are checked in, the server(s) adopt the new directives and continue to self-manage.

The practicality of such a setup is a little far-fetched, but it seemed feasible enough for smaller projects. However, since the release of Google App Engine, this concern has disappeared for simple web applications. Google automates the system administration, and the scaling! To run the app, you just write the code and hit deploy. That’s a huge step! Administration concerns? Pretty much solved.

The next thing is the financial concern. How do you pay for it? Or rather, how does it pay for itself? This took longer to figure out, but here we are. From the wiki essay:

You use the same Google Merchant account that App Engine debits as the one that accepts donations. This way no bank account is involved. Then you track the money that goes into the account (using the Google Merchant IPN equivalent). Then you look at your usage stats from the App Engine panel and predict future usage trends. Then calculate the cost per month. Then divide the cash in the account by that and you have how long the service will run. You make this visible on all pages (at the bottom, say): this service will run for X months, “Pay now to keep it running.” You accept any amount, but you are completely clear about what the costs are. And this is all automated.

Take the humans out of the loop! (That’s a WarGames reference)
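To show how mechanical that calculation is, here’s a back-of-the-envelope sketch in Python. The numbers and names are made up; the real version would pull donations from the merchant account and usage costs from the App Engine dashboard automatically:

    # Back-of-the-envelope "months of runway" calculation described above.
    # Inputs are made up; a real version would read them automatically.
    def months_of_runway(account_balance, recent_daily_costs):
        """Divide cash on hand by the projected monthly burn rate."""
        avg_daily_cost = sum(recent_daily_costs) / len(recent_daily_costs)
        projected_monthly_cost = avg_daily_cost * 30
        return account_balance / projected_monthly_cost

    balance = 42.50                    # donations received minus App Engine debits
    daily_costs = [0.35, 0.41, 0.38]   # recent daily App Engine charges
    months = months_of_runway(balance, daily_costs)
    print("This service will run for %.1f more months. Pay now to keep it running." % months)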

Then you rely on the same sort of community approach as open source to contribute to the application. A few members of the project community are given certain rights; for example, some are given permission to deploy the app from time to time to update the running service.

If the service isn’t useful, nobody uses it, it’s not paid for, and it disappears. If it is useful, people will pay to keep it running. They’re assured they’re paying operating costs, which are significantly lower than most because they don’t include paying for human resources! Volunteers might need to meddle with settings, but otherwise the coders are in control and the community accepts or denies changes made by whoever wants them.

So if this is interesting, read the full essay I wrote up on the wiki. It’s been my intention to prototype and validate this model with many of my projects.

Meet me at SXSW 2010 (http://sxsw.com)

Last week I got an email that said my SXSW speaking proposal was accepted. Strangely, my joy was coupled with a bit of disappointment. After the PanelPicker closed, I felt like I hadn’t marketed my talk well enough. That, combined with my OSCON proposal being rejected, made me feel there was no chance I was going to speak at SXSW. However, the more I thought about it, the more I liked the sound of it. I’ve had such a busy year going to conferences, speaking, writing, and building stuff. You all realize this webhooks stuff doesn’t pay, right? I was actually looking forward to not having a huge stressful talk to worry about.

But now I do! Lucky for you, I’ve decided to make it a really great talk. I started work on it immediately and will probably be developing it right up until March.

What is the talk? It’s not another “WebHooks are the Future” talk. Well it is, but in disguise. I’ve decided to focus on context. Like I mentioned before, all that vision stuff usually at the end of my talks will come front and center. The talk is called: How WebHooks Will Make Us All Programmers.

In order to describe this, I’ll need to first explain webhooks to those that are unfamiliar. I feel this is getting easier and easier as I gain experience and more tangible examples pop up, like PubSubHubbub and TweetHook. However, I’ll have to take it up a notch further because I won’t have that much time to talk about them. Before I can explain how webhooks will make us all programmers, I also have to share why this is worth doing at all. I have to explain why we should all be programmers.

Yay! It’s a sort of philosophy of computer science. I mean, that’s the stuff I really love talking about, so why don’t I go all out? Well I will.

But I have a lot of work in front of me. I need to finish building more examples. I’ve finally built enough infrastructure that I can really start showing the power of webhooks through examples. I need to develop a super concise explanation of webhooks, probably something visual and animated — a mini presentation within a presentation. I also need to start practicing my delivery again. The last time I went without practice it was a disaster, despite getting my point across.

Part of my preparation will involve lots of writing and sharing on this blog. I need to prototype and validate individual ideas before I bring them all together in an epic talk. That’s the only way to make it as awesome as it should be. It also means there will be lots of follow up material for attendees.

So at the very least, my goal is to post something relating to the ideas in this talk once a month. And your feedback will be much appreciated.

It’s about time for slutty domain registrars and confusing DNS hosts to die in a fire.

I’ve wanted to reinvent the domain registration and management experience for a while. Every time I use what’s out there now I die a little inside. I have over 100 domains and I register new ones fairly often. Here’s my current experience:

I start with Instant Domain Search when I think of a domain or need to come up with a domain. Real-time check-as-you-type really helps you in the brainstorming process. There’s actually nothing wrong with this. It’s the most fun part of the experience, but it ends here.

Then I register. If it’s not a fancy TLD like .io or something, I use the registrar I have most of my domains with: Cheap Domain Registration. This is perhaps the worst part. It’s basically a GoDaddy reseller that I stumbled across a long time ago and started registering domains with. Since I’d rather not have my domains spread across several crappy registrars, I’ve decided to stick with them. Plus, it’s such a frickin pain to transfer domains. I’ve done it a few times and I still don’t even know how it works.

Anyway, it’s effectively GoDaddy, which is the most popular registrar. I don’t exactly know why. It’s probably the sluttiest of them all. It’s so noisy, fake, and slow. Decent prices, but of course they’re going to try and upsell you in every way possible. They got me once because I was in a hurry and I clicked the wrong thing. It’s at least (yes, they give you the option for more) 3 pages of upsell offers.

However, it does have good support, which is important because DNS and domains are such a pain to novices. I sort of like the fact they call me sometimes after registering asking if I got everything set up. I told them never to call me again, but that I’m happy they’re doing that.

That good karma goes out the window when you try and manage your domains. This is the slowest part, and the second worst part of the experience next to avoiding all the upsell traps. Luckily I don’t need to use it ever except for DNS.

Now, it’s nice of them to provide free DNS, but it’s so hard to get to and so clunky once I’m there. I usually want to use EveryDNS just for that, but I still have to use their interface to point my domain to EveryDNS. I also tend to use their web redirect for making naked domains go to www, since each one of those would use up my limited number of records on EveryDNS. So I’m stuck with them for that usually.

Once I’ve got it pointed to EveryDNS, it’s pretty okay. The EveryDNS interface is not so pretty, but it’s quick and to the point. I remember getting a bit confused in the early days, partly due to the interface and partly due to DNS not being the most user friendly of technologies. Unfortunately my free account can only have 20 records, including web redirects. I should probably just donate and get that lifted, but I suppose I’m lazy. I usually just swap out domains I’m not using anymore, or end up using my registrar’s DNS for simple domains.

In the end I’m using up to three systems, DNS and registration both being quite a hassle, particularly in the setup. But if you register a lot of domains, make a lot of sites, you’re in setup mode quite a bit. There are a lot of things that could be better, from the UI to the sales process. It could all be one nice solution that’s just done right.

So I decided to start working on that. It’s called domdori, which is short for “domains done right.” The core experience looks like this:

You find a domain with real-time search. Then you use Amazon 1-click payment to buy the domain right there. You now have the domain. The default records don’t make your new domain point to some ugly, slutty landing page advertisement. The default landing page is whatever you make it. In fact, the DNS settings can default to whatever you want. You get not only an advanced DNS manager UI for power users, but a very straightforward DNS manager UI with smart defaults and complexity abstracted away for most users.

That alone would just completely make my day, but there’s more (in a “less” sort of way). Only, we’ll save that for later. Until then, a public alpha of domdori is approaching…

This is about ten days old, but still worth mentioning. I was working on PostBin trying to support file uploads. I realized I really wanted to have icons for the files to help differentiate them from other POST params and make it feel more polished. Of course, this would mean I’d have to find icons for the popular file/mime types … and if I were going to do that, I might as well build a service … but I didn’t want to build a service.

I turned to Twitter. In the past, I’d mentioned wanting something and somebody actually built it (using my tools, no less). I figured this was a bit more work than last time, but it couldn’t be that much more. So I gave it a shot and tweeted it. Next thing I know, Paul Tarjan is on it. Some hours later: stdicon.com

I’m credited at the bottom, but really, I just had the idea and came up with the domain. Paul wrote all the code and even collected all the icons (and manually uploaded them to the app). We discussed implementation details over IM, but that was it. Pretty rad!

The idea is that given a file extension (“txt”, “gif”, etc) or a mimetype (“text/plain”, “application/zip”, etc) you can get a resizable icon by putting together a simple URL, like http://www.stdicon.com/mp3?size=16 or from a particular icon set http://www.stdicon.com/neu/html?size=64. Here are some examples from various sets:

Nuvola set: text/plain, jpg, application/pdf, mp3
Apache set: text/plain, jpg, application/pdf, mp3
Crystal set: text/plain, jpg, application/pdf, mp3
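Since the whole API is just a URL scheme, a helper is almost trivial. Here’s a small Python sketch built from the examples above; the icon set slugs (beyond “neu”) are assumptions on my part:

    # Tiny helper around the stdicon.com URL scheme shown above.
    # Icon set slugs other than "neu" are guesses for illustration.
    import urllib.request

    def stdicon_url(type_or_ext, size=32, icon_set=None):
        """Build an icon URL from an extension ("mp3") or mimetype ("text/plain")."""
        base = "http://www.stdicon.com"
        if icon_set:
            base += "/" + icon_set
        return "%s/%s?size=%d" % (base, type_or_ext, size)

    def fetch_icon(type_or_ext, size=32):
        """Download the icon image bytes, e.g. for caching alongside uploads."""
        with urllib.request.urlopen(stdicon_url(type_or_ext, size)) as response:
            return response.read()

    print(stdicon_url("application/pdf", size=64))
    # -> http://www.stdicon.com/application/pdf?size=64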

And that’s it! Paul wrote a post about it. Somebody requested a simple API for just doing the mimetype to file extension conversion, so Paul added it. Strangely, the same day, MIME API was released. That’s fine. Just means stdicon can focus on icons.

Anyway, I pulled this same stunt just the other day, using Twitter to get more cool infrastructure built, but I’ll have to write about it later.

Oh, webhooks. What have you become? Just another buzzword for the rising real-time web trend? I admit, without a spec or an obvious definition, it seems to lend itself to such a fate. It’s kind of like AJAX. But those who know that the true meaning of AJAX lies mostly in its first letter, and that it is actually a significant and useful pattern, should know the same of webhooks.

In fact, those that are familiar with the heart of AJAX can even compare webhooks mechanically to AJAX. It’s like an inverted, backend, server-to-server version. Yeah? Okay, that’s a stretch. Maybe I won’t open my SXSW talk with that description.

Actually, I do want to start my SXSW talk backwards. What I usually leave until the end of my webhooks talks, and as such tend to skim over for lack of time, I’m going to make the focus of my next major talk. You see, some people just don’t get it. I think starting with the mechanism and extrapolating doesn’t work if people get stuck on the mechanism.

I questioned both Anil Dash’s and Mark Cuban’s posts about webhooks and PubSubHubbub. It turns out they both do actually get it; they just realize they need to simplify it for people to understand. It’s the same reason PubSubHubbub is focusing on feeds for now, as opposed to general HTTP pubsub.

It’s not that people are stupid, or even that we’re smarter than others. The magic of webhooks is in the emergence of an event-driven programmable web, something that’s not terribly obvious when looking at what webhooks are in and of themselves: callbacks over HTTP. Most people don’t even see that; they see notifications over HTTP.
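Mechanically, a webhook really is that small: when an event happens, the source POSTs it to whatever callback URL the user registered. A minimal sketch, with made-up event fields:

    # Minimal illustration of "callbacks over HTTP": deliver an event to a
    # user-registered callback URL. Event fields are made up.
    import json
    import urllib.request

    def fire_webhook(callback_url, event):
        """POST an event payload to the callback URL the user gave us."""
        request = urllib.request.Request(
            callback_url,
            data=json.dumps(event).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.status  # any 2xx means the receiver accepted it

    # fire_webhook("http://example.com/my-script", {"type": "commit", "message": "fix typo"})

The interesting part isn’t the snippet; it’s that the URL on the other end can run any code the user wants.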

You can compare it to Go: a game you can learn to play quickly, but spend a lifetime mastering. More importantly, after you learn the rules and play through some simple scenarios, you think you get the game… but as you play more, it becomes something else. What it’s “really about” emerges as you actually experience it.

It reminds me of the people who say, “Well, you can do that with XMPP; that’s what you should really use,” when it turns out they’ve never really programmed a system with XMPP, and definitely not with webhooks. They have no idea of the convenience of webhooks over XMPP, or what that affords. Even those who believe webhooks sound nice in theory will go implement it and come out saying, “Wow, this really is quite cool,” as the PBworks CTO did after implementing webhooks in PBworks.

Anyway, I haven’t even gotten to answering my question: What are webhooks really about? The real answer is that they’re about something completely different!

I was standing next to my long-time colleague Adam Smith (we built AjaxWar together in 2005, before Comet had a name) as he read Mark Cuban’s post. After, he remarked, “Well it’s good to know you’re still two steps ahead.” Yep. ^_^

You see, the funny thing is that webhooks aren’t even the ultimate goal. They’re a means to an end. That’s what my next talk is about, only I decided against “Beyond WebHooks” … instead I went with “How WebHooks Will Make Us All Programmers.”

Have I been pushing an agenda to subvert the masses and make us all programmers? I’ll admit that’d be nice, but really I just think it’s the most interesting outcome of the progression I see going on. Honestly, if explained properly, I think it’s something we can all get behind. In fact, I explained webhooks to Cheri Renee at the last DevHouse from this perspective and it actually got her excited. She’s not even a programmer.

So that’s the angle for my next talk on webhooks. The real big picture. At least for me, that’s what webhooks are really about. Hopefully I’ll find some time to get into more details before the talk. Until then, please vote for the proposal so I can finally make it to SXSW (and give this talk)!