Like a lot of sites, we display a list of recent blog posts on our homepage. The mechanism by which we get it there has gone through several revisions over the years, and we found it interesting to reflect on the path we’ve taken.
1) In the first implementation our application server pulled the RSS feed from the blog inline while generating the response. This is probably the simplest, dumbest thing that could possibly work. I always like starting with the simplest thing that can work. But of course this is slow: the user has to wait for two server responses just to see the homepage, and our app servers are blocked waiting to hear from the blog. We could do better.
2) Well, what’s the simplest way to get the feed there, but not hold up the page waiting for it? Give the user the homepage without the blog content and let them pull that bit asynchronously. The client will still have to wait a little bit for that chunk of content, but the rest of the page will show up quicker. The blog feed probably won’t be the first thing the user’s going to look at anyway. Pretty decent.
3) At some point we decided that our whole site, including the homepage, should run over HTTPS. Sounds like a tangent, right? What does this have to do with the blog feed? Mixed content warnings, that’s what.
You probably know this already, but when you are on a secure page and make an insecure request (like to a blog RSS feed that runs over HTTP), your browser’s going to get all, “Yo, dawg, not cool.” Like, ugh.
4) There was no reason for our blog to emigrate to SSL land, so the homepage would have to call the application server to get the feed. Not a big deal there. Really all we need is the titles and URLs for the five most recent entries, and those don’t change very frequently. So we set up a periodic job to call over to the blog, get the RSS feed, and toss the data we care about into Memcached. When the client makes the asynchronous request for the feed, we serve up this small chunk of cached data. Easy and super fast. Game on!
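That periodic job doesn’t need to be anything fancy. Here’s a minimal sketch of the idea, assuming an RSS 2.0 feed and a Memcached-style client with a `set(key, value)` method; the feed URL, cache key, and function names are all illustrative, not our actual code:

```python
import json
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def extract_entries(rss_xml, limit=5):
    """Pull title/URL pairs for the most recent items out of raw RSS."""
    root = ET.fromstring(rss_xml)
    entries = []
    for item in root.iter("item"):
        entries.append({
            "title": item.findtext("title", default=""),
            "url": item.findtext("link", default=""),
        })
        if len(entries) == limit:
            break
    return entries

def refresh_blog_feed(cache, feed_url="https://example.com/blog/rss"):
    """The cron job body: fetch the feed, keep only the bits we care
    about, and stash them in the cache as a small JSON blob."""
    with urlopen(feed_url) as resp:
        rss_xml = resp.read()
    cache.set("blog_feed", json.dumps(extract_entries(rss_xml)))
```

The asynchronous endpoint then just reads that one small cache key and returns it, so the client-facing request never touches the blog at all.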
5) And then came the day that Chris Felt Meddlesome. “Uh, guys, why do we still have our users making this extra request?”
Yeah, ok. Once upon a time splitting off that request sped up general page load times. Now it was just in the way. The data we need is always going to be coming from a high-speed cache, so our initial troubles with app server blocking are gone. If we moved the feed back into the original page generation, we’d eliminate an extra request. So move it we did. And it was good.
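Folding the feed back into page generation amounts to a cache read during rendering. A minimal sketch, where `cache` stands in for a Memcached client with a `get(key)` method and the markup is illustrative:

```python
import json

def render_blog_feed(cache):
    """Build the homepage's blog-feed markup from the cached entries."""
    raw = cache.get("blog_feed")
    if raw is None:
        # Cache miss: omit the feed rather than block page generation
        # on a live call to the blog.
        return ""
    entries = json.loads(raw)
    items = "".join(
        f'<li><a href="{e["url"]}">{e["title"]}</a></li>' for e in entries
    )
    return f"<ul>{items}</ul>"
```

The key property is that this never waits on the blog: on a miss it degrades to an empty feed, and the periodic job quietly repopulates the cache.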
So did we just come full circle? Did it take us four changes to fix one inefficiency? Maybe. But each time we did just enough to fix the single problem in front of us. It wasn’t perfect from the start, but it was working, and it was a simple enough implementation that we could easily tweak it over time.