Something wonderful has happened at Social Blade! We’ve spent the last 45 days preparing some much-needed upgrades to the website and all the services we provide. It’s finally here: Social Blade has reached for the sky and has risen to the cloud! We’re now using Google Cloud and it rocks!
Our code monkeys have been hard at work migrating everything while changing our architecture in a major way. The result of all this development will be a better and faster experience for everyone.
How has Social Blade grown?
Like all fantastic websites, Social Blade started with a single server that powered everything. It was an easy setup, convenient and functional. As the company began to grow, we needed to make many changes along the way, both design-wise and architecturally.
We hit the top 4,000 global websites on Alexa and continued to grow at an impressive rate. Social Blade started to feel like it was approaching its hardware limits, so changes had to be made; we are now almost in the top 1,000 global websites. This is a catalog of the changes and milestones that Social Blade achieved along the way.
We split the site across three main server boxes: one for the front end and two for the database (in a master -> slave setup, which gave us redundancy). We began to offer more features like real-time analytics, as well as an in-house API for our statistics, which now powers the Chrome extension and will soon power the website itself.
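For readers curious what a master -> slave MySQL setup looks like under the hood, the gist is just a couple of config lines plus a replication statement on the slave. This is a generic sketch, not our actual configuration; the server IDs and log names below are the conventional placeholders:

```ini
# master my.cnf — enable binary logging so writes can be replayed on the slave
[mysqld]
server-id = 1
log_bin   = mysql-bin

# slave my.cnf — each replica just needs a unique server id
[mysqld]
server-id = 2
```

The slave is then pointed at the master with `CHANGE MASTER TO MASTER_HOST=..., MASTER_USER=..., MASTER_LOG_FILE=..., MASTER_LOG_POS=...` followed by `START SLAVE;`. Reads can be served from either box, and if the master dies the slave still has the data.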
Popularity of Real-Time Statistics
With the continued growth of the real-time pages, bursts of traffic, and an ever-increasing database, we needed to give realtime its own server. With these changes came the need for a better caching solution as well: Redis.
There were times, about half a year ago, when a shout-out from one of our favorite Swedish YouTubers could drown us in traffic and push us to our limits. At that point we scrambled and made the code more efficient, which helped for a little while (but we soon began to hit more hardware limits again). We quickly realized that we needed to expand and prepare Social Blade for the future.
Social Blade continues to grow, and we couldn’t do it without the power of the community. We’ve now load-balanced multiple application layers (like the website and realtime) while setting up multiple Redis clusters to cache the site better. Rather than going with Amazon, we ended up deciding to try Google Cloud for our migration, and it’s been a wonderful learning experience for the developers.
Our code monkeys have been hard at work debugging any issues encountered (and there were a lot of issues). We partially rewrote the whole website and made the switch from apache2 to nginx. During this process we also combed through all bug reports to fix issues we’d seen ourselves or that users had reported.
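To give a feel for the nginx side of things: a load-balanced front end boils down to an `upstream` block naming the application servers and a `server` block that proxies to them. This is only an illustrative sketch; the addresses, ports, and hostname below are placeholders, not our real topology:

```nginx
# Round-robin load balancing across application servers (placeholder addresses)
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name socialblade.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Adding capacity for a traffic spike then becomes a matter of adding another `server` line (or letting the cloud load balancer do it for you) rather than resizing one box.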
Social Blade is a monolith of architecture. In its totality, Social Blade is powered by:
- 14 servers
- 2 load balancers (with auto-scaling capabilities for burst traffic)
- 50+ Processors
- 250GB+ of RAM
- 1TB+ database
- blood, sweat, and tears
Migrating all of Social Blade has definitely been a challenge that would test the sanity of any developer. There have been multiple rewrites of numerous systems, and various important sections were ripped out to prepare for revisions or necessary improvements. Google Cloud brought a few challenges that we were not prepared for: whether it was the load-balancing setup or the Cloud SQL proxy, it was a headache.
I will say this: the Google Cloud interface is welcoming and easy to navigate compared to Amazon’s. It is exactly what you’d expect from a Google product.
Overall this has been a positive experience. We’ve been testing and re-testing many aspects of the site while preparing to launch into the cloud. This has been a grueling, taxing, yet rewarding process that allowed our developers to grow and improve much of the original design.
What this means for the end user is a faster website that can handle a much higher volume of traffic without any major slowdowns. Traffic spikes will no longer wake developers at 2 a.m. to scramble to restart services that have gone down. We hope to bring a better experience to our end users, and allow this website & community to grow together.