It's been a whirlwind week for me. Monday through Wednesday I was in Göteborg, Sweden at Scan Dev, and today I was at Berlin Expert Days. Thankfully I gave the same talk at both conferences; the slides can be seen and downloaded here.
All code is also available on my GitHub page.
No more talks for a while please
One day last week a couple of co-workers asked how one would go about implementing an API limiter for a "global scale" RESTful web service. I thought about it briefly and assumed that most proxies would already have implemented this. Lo and behold, it's true: nginx, HAProxy, and Squid all have ways to put limits on HTTP traffic. The one big problem with these is shared state. In other words, between multiple instances of a proxy, there isn't any.
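To give a concrete sense of what those proxies offer out of the box, here is a minimal sketch of per-client rate limiting with nginx's stock limit_req module (the zone name, rate, and upstream name are illustrative, not from any real deployment):

```nginx
http {
    # One shared-memory zone keyed by client IP: 10 MB of state,
    # allowing 10 requests per second per address.
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        listen 80;

        location / {
            # Allow short bursts of up to 20 queued requests before
            # nginx starts returning errors to the client.
            limit_req zone=api burst=20;
            proxy_pass http://backend;  # hypothetical upstream
        }
    }
}
```

Note that the zone lives in that one nginx instance's shared memory, which is exactly the "no shared state between proxy instances" problem described above.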
So, having never worked with node.js before and having used Redis only for experimenting, I figured it was time to try my hand at both and implement an HTTP proxy limiter. The result, node-rate-limiter-proxy, is up on GitHub. Fork it or let me know if it's useful. If you are using Rails/Rack, you can also have a look at rack-throttle, which has a few nice features and configuration options.
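The core idea behind a Redis-backed limiter like this is the classic fixed-window counter: INCR a per-user key, set an EXPIRE on the first hit of the window, and reject once the count passes the limit. Here is a runnable sketch of that logic in JavaScript; the in-memory Map is a stand-in for Redis (same increment-and-expire semantics), and the function names are illustrative rather than taken from node-rate-limiter-proxy itself:

```javascript
// Fixed-window rate limiter sketch. With Redis, "count" would be INCR
// on a per-key counter and "resetAt" would come from EXPIRE; here a Map
// stands in so the logic is self-contained and runnable.
function makeLimiter(limit, windowMs, now = Date.now) {
  const store = new Map(); // key -> { count, resetAt }
  return function allow(key) {
    const t = now();
    let entry = store.get(key);
    if (!entry || t >= entry.resetAt) {
      // First request of a new window: reset the counter and
      // schedule its expiry (EXPIRE in the Redis version).
      entry = { count: 0, resetAt: t + windowMs };
      store.set(key, entry);
    }
    entry.count += 1; // INCR in the Redis version
    return entry.count <= limit; // over the limit -> reject (e.g. HTTP 429)
  };
}
```

With `const allow = makeLimiter(10, 1000)`, the eleventh call to `allow('user42')` within the same second returns false. Moving the counter into Redis is what lets several proxy instances share one view of each user's request count.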
Of course, with this implementation I haven't quite solved the "global scale" requirement. To do so would require some fun multi-master replication, since the "global scale" web service we're talking about is currently deployed to 3 data centers around the world. Let's tackle that requirement next weekend! (Or just assume that a user stays in the same data center for his entire session/duration of the request-limiting timeout, which is probably also reasonable to start with.)
Just finished my talk at JAOO "Continuous Deployment and DevOps: Deprecating Silos" with some good feedback and plenty of audience questions and participation. Thanks to my co-presenter Tom Sulston from ThoughtWorks too!
Slides are up, and probably only useful if you read the speakers' notes or were there for the talk, since they are mostly just pretty pictures!
UPDATE: Today@JAOO did an interview with Tom and me. Only a few misquotes, but you can read it for yourself over here.
In my last post, "Google is breaking the web!", I discussed various approaches to customizing REST responses for mobile clients. The problem is, of course, complicated by reality and the fact that there are so many devices with varying abilities. What I discovered in my quest for fantastic cacheability and making client devs and device owners happy was something like the following.
I've been fighting (discussing) with some people about "response tailoring" REST APIs. That is, depending on the request or the particular device making it, you return a response tailored for that device (web; mobile: S60, iPhone, etc.). A few options for doing this (that I could think of):
- Support a well-known, defined set of
- Support a well-known, defined set of MIME/media types
- Support accessing sub-resources or partial resources through a URL tree, i.e.
places/1234, places/1234/addresses, places/1234/comments
- Allow for dynamically tailored partial resources through matrix or query params, i.e.
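That last option can be made concrete with a small sketch. Assuming a hypothetical `fields` query parameter (the parameter name and resource shape here are illustrative, not from any real API), a request like `GET /places/1234?fields=name,address` would return only the named keys:

```javascript
// Dynamic partial resources via a hypothetical "fields" query parameter.
// Given the full resource and the raw parameter value, return only the
// requested fields; with no parameter, return the resource untouched.
function partial(resource, fieldsParam) {
  if (!fieldsParam) return resource; // no tailoring requested
  const out = {};
  for (const field of fieldsParam.split(',')) {
    if (field in resource) out[field] = resource[field];
  }
  return out;
}
```

One nice property of the URL-tree and fixed-parameter-set options is that each tailored response still has a stable URL, which keeps intermediary caches effective; fully free-form field lists trade some of that cacheability for flexibility.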
There are also multiple things to consider when dealing with mobile devices (non-exhaustive of course):
- Bandwidth: Overall bandwidth is a big deal, as our success is measured by how efficiently and inexpensively we can deliver content to the consumer's device
- Latency: How long the request takes to reach the server (or an intermediary cache) and the response to come back to the device
- Processing time: Mobile devices are inherently less powerful (particularly lower-end Symbian devices), and parsing big JSON blobs is not always reasonable