You've helped IntenseDebate grow tremendously over the past few months. Overall this is great! We love helping people take the conversation on their blog or website to the next level, and of course once you get IntenseDebate going, your community grows even faster. All that growth has put quite a strain on our systems, though. I'm sure you're all aware of our outages over the last few weeks.
We've been working non-stop to address these issues. Our highest priority is improving the performance and long-term scalability of IntenseDebate for you.
Now, we could leave it at that, stay completely opaque about the changes we're making, and just hope you'll trust that we're working hard on the problem. But you deserve more than that, and you need to be able to trust that our service can perform for you. So here's a behind-the-scenes peek for those of you interested in how we're tackling the problem.
The first major change we've been working on is sharding some of our largest and most frequently used tables into a more efficient and scalable database schema. For those of you not familiar with sharding, it means splitting your data across several smaller tables instead of storing it all in one large monolithic table. This lets us retrieve your data much more quickly and makes for a much more horizontally scalable system. (We can continue to add more tables to keep each one small as our total data storage grows.) This is complete for comments, our most heavily used data, and we'll continue sharding other tables where appropriate.
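To make the idea concrete, here's a minimal sketch of key-based sharding. The shard count, table names (`comments_0` through `comments_7`), and the choice of `blog_id` as the shard key are all hypothetical, not IntenseDebate's actual schema; the point is just that a simple function routes each read and write to one of several smaller tables.

```python
# Hypothetical sharding sketch: route each blog's comments to one of
# NUM_SHARDS smaller tables instead of one monolithic "comments" table.
NUM_SHARDS = 8

def shard_table(blog_id: int) -> str:
    """Pick the shard table for a given blog by hashing its id."""
    return f"comments_{blog_id % NUM_SHARDS}"

def comments_query(blog_id: int) -> str:
    """Build a query against the correct shard for this blog."""
    table = shard_table(blog_id)
    return f"SELECT * FROM {table} WHERE blog_id = %s ORDER BY created_at"
```

Because each table holds only a fraction of the rows, indexes stay small and queries stay fast; adding capacity is a matter of raising the shard count and rebalancing.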
The second change is creating more summary tables to simplify the retrieval of common data. Even after sharding, some of this data is still too slow to compute on the fly. Summary tables cache frequently used computations, which lets us serve that information almost instantly.
Last, but not least, we're optimizing our logging and background processing. We do quite a bit of logging to ensure things are running smoothly and to help us debug quickly when they don't. In particular, the syncing process to and from WordPress blogs generates a lot of activity that we store for troubleshooting. We've done some work to make this logging faster and less resource-intensive, as well as some fine-tuning of what we log and when, to help ensure that these troubleshooting tools don't drag down the performance of the core service.
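One common way to make logging cheaper on the hot path is to buffer records in memory and flush them in batches, writing immediately only when something goes wrong. This is a generic sketch of that technique using Python's standard library, not a description of our actual logging pipeline; the logger name, messages, and buffer size are illustrative.

```python
# Batched-logging sketch: buffer routine records in memory and flush
# them in bulk, so the request path isn't blocked by one write per event.
import logging
from logging.handlers import MemoryHandler

class ListHandler(logging.Handler):
    """Collects formatted records in a list; stand-in for a file/DB sink."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(self.format(record))

sink = ListHandler()
# Buffer up to 100 records; an ERROR (or worse) flushes everything at once.
buffered = MemoryHandler(capacity=100, flushLevel=logging.ERROR, target=sink)

log = logging.getLogger("wp-sync-demo")
log.setLevel(logging.INFO)
log.addHandler(buffered)

log.info("fetched 12 comments for blog 42")  # buffered, not yet written
assert sink.records == []
log.error("failed to push a comment")        # flushes the whole buffer
```

The nice property is that the full trail of routine messages is still preserved and lands in the sink the moment an error occurs, so troubleshooting detail isn't lost even though the steady-state write load drops.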
I want to reiterate that we're acutely aware of the performance problems over the last few weeks, and there is no excuse for poor service. We've fixed the most pressing issues, and we're taking a number of proactive steps to make sure we can maintain the level of service you deserve from us. Thank you all for your patience and understanding.