Earlier this summer, Dave Winer seemed to decide that he wasn’t going to convince anyone else to provide an alternative to Twitter by making RSS more realtime, so he set out to do it himself. In late July, he described his plans on RSSCloud.org; just a few days ago he announced the first client-side implementation in his River2 RSS aggregator; and just today, Matt Mullenweg of Automattic announced that all WordPress.com-hosted blogs now give realtime notifications via RSSCloud (there is also an RSSCloud plugin for self-hosted WordPress).
Predictably, this progress has just turned up the volume on the nay-sayers. Some of their criticisms are reasonable, but once again the Internet reminds me how willing people are to speak with authority that is exceeded only by their ignorance (I leave it to the reader to decide what my arrogance-to-ignorance ratio is). I should know better than to wade in and engage with such people, but I did it anyway after seeing some of the comments on one critical post. Having made the effort, I thought I might as well repost it on my own blog. The comment that put me over the edge was from someone calling himself “Roger,” flinging a criticism at those (Dave Winer, presumably) who’d failed to learn the lessons of the past:
Roger (not Rogers), as our self-anointed historian, could you please recount the casualties of the first great RSS aggregator invasion that you think fell in vain?
I remember the fear. I don’t remember much, though, in the way of casualties. People adjusted the default retry intervals their aggregators shipped with and implemented the client side of ETags and proper HTTP HEAD requests, and server-side software followed suit. The rough edges weren’t fixed overnight, but that was fine, since RSS didn’t take off overnight, and even with Automattic’s support, RSSCloud isn’t going to either.
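The ETag mechanism that took the pressure off back then is simple enough to sketch. Here’s a rough illustration (in Python, with a made-up `respond_to_feed_request` helper) of what the server side of a conditional request looks like: when the client presents the ETag from its last fetch and the feed hasn’t changed, the server answers 304 with no body at all.

```python
import hashlib

def respond_to_feed_request(feed_body: bytes, if_none_match):
    """Return (status, headers, body) for a feed fetch, honoring
    the client's If-None-Match conditional header.

    This is an illustrative sketch, not any particular server's code."""
    # Derive an ETag from the feed content itself.
    etag = '"%s"' % hashlib.sha1(feed_body).hexdigest()
    if if_none_match == etag:
        # Client's cached copy is still current: 304, no body transferred.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, feed_body
```

A polite aggregator does the first fetch cold, remembers the `ETag` header, and sends it back as `If-None-Match` on every subsequent poll, so unchanged feeds cost the server almost nothing.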
As for those worried about all the poor webservers getting hammered every time an update notification goes out to the cloud, is that really an issue? I mean, event-driven webservers like nginx or lighttpd can retire something like 10K requests a second on relatively modest hardware and support thousands of concurrent connections out of a few MB of memory. Yes, that throughput is for static files, but just how often is that RSS feed changing? Even if your RSS feed is served dynamically, you can put nginx in front of Apache as a reverse proxy, or whatever, and set up a rule to cache requests to your RSS feed URL for 1s.
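That reverse-proxy rule is only a few lines of nginx config. A rough sketch (the cache zone name, paths, backend port, and feed URL are all placeholders you’d adjust for your own setup):

```nginx
# Define a small on-disk cache for feed responses.
proxy_cache_path /var/cache/nginx/feed keys_zone=feedcache:1m;

server {
    listen 80;

    # Hypothetical feed URL; match whatever your blog actually serves.
    location = /feed/ {
        proxy_pass http://127.0.0.1:8080;  # Apache (or other dynamic backend)
        proxy_cache feedcache;
        proxy_cache_valid 200 1s;          # serve the cached copy for 1 second
        proxy_cache_use_stale updating;    # one request refreshes; the rest get the stale copy
    }
}
```

With a 1-second cache, a thundering herd of aggregators responding to the same ping costs the dynamic backend at most one request per second, no matter how many subscribers there are.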
As for the strain caused by delivering the notifications themselves, the same techniques that have made it possible to serve thousands of requests a second from a modest server are applicable to sending notifications too. At this point, someone just has to write a cloud server that uses them. They can probably start with the software script-kiddies use to send spam 🙂
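The core idea is just not to deliver pings one at a time. A minimal sketch (Python; `notify_all` and `ping` are names I made up, and in a real rssCloud server `ping` would perform the actual HTTP POST to each subscriber’s callback URL):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def notify_all(subscribers, ping, max_workers=100):
    """Deliver update pings to subscriber callback URLs concurrently.

    `ping(url)` is a placeholder for the real HTTP POST; a slow or dead
    subscriber only ties up one worker instead of stalling the whole run.
    Returns the list of URLs whose ping failed, so they can be retried
    or dropped from the subscriber list.
    """
    failed = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(ping, url): url for url in subscribers}
        for fut in as_completed(futures):
            try:
                fut.result()
            except Exception:
                failed.append(futures[fut])
    return failed
```

An event-driven (rather than thread-pool) version would scale further still, but even this naive approach turns a serial, minutes-long delivery run into seconds.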
The critique of the firewall issues, etc., is the only one that makes sense to me. It seems like that needs to be turned around to use one of the “comet” techniques.