
ReviewSignal Testing: A Conetix Perspective

By Tim Butler

When the opportunity arose to take part in an unbiased performance test that appeared to include a regional focus, we decided to participate. As we are no strangers to benchmarks (having compared webservers as well as WordPress versions), this seemed like a good opportunity to see how we performed against the competition.

We had expected that other Australian providers would also be participating, as these were the hosts we wanted to compare ourselves against. Unfortunately, it seems we were the only participant from Australia, and the unique characteristics of our system meant that we were at a disadvantage in a benchmark scenario.

Australian Internet Connectivity

Firstly, as anyone local would know, Australia is different from the rest of the world. Our international connectivity can be hit and miss at times, especially when undersea cables break, which happens more often than you might expect. We literally lose terabits per second of connectivity, forcing the remaining links to become congested very rapidly. As CloudFlare has also discovered, data costs here are very high.

So what does this mean? If you conduct testing from outside Australia, you'll be subject to high latencies and intermittent faults. Our focus is solely on the Australian and New Zealand market, so our monitoring and metrics are also conducted from within Australia, where our customer base is. How much difference does it make? Lots:
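To illustrate why distance matters so much, here's a rough back-of-the-envelope sketch (not Conetix's measurement methodology, and the RTT figures are assumptions for illustration only). Establishing a TCP connection, negotiating TLS, and sending the HTTP request each cost at least one full round trip, so time-to-first-byte is dominated by latency long before server speed comes into play:

```python
# Illustrative sketch: how round-trip time (RTT) compounds into
# time-to-first-byte (TTFB). All numbers are assumptions, not measured data.

def estimated_ttfb_ms(rtt_ms: float, round_trips: int = 4,
                      server_time_ms: float = 50) -> float:
    """Rough TTFB: TCP handshake + TLS handshake + HTTP request/response,
    each costing roughly one round trip, plus server processing time."""
    return rtt_ms * round_trips + server_time_ms

local_rtt = 15      # assumed: Sydney client -> Sydney server
overseas_rtt = 180  # assumed: US client -> Sydney server

print(estimated_ttfb_ms(local_rtt))     # 110.0 ms
print(estimated_ttfb_ms(overseas_rtt))  # 770.0 ms
```

The same server looks roughly seven times slower when tested from the other side of the Pacific, which is why the testing location matters so much.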


Like ReviewSignal, we also monitored our system via StatusCake. However, our results varied significantly, because we ensured that one of the monitoring Points of Presence was based here in Australia. Uptime-wise, August looks worse because we suffered a layer-7 DDoS attack which couldn't be mitigated automatically. As tracked by Panopta, here's our network uptime:


The next issue was caching. Kevin didn't allow us to install any caching plugin within WordPress, which placed us at a significant disadvantage. We do this for 100% of our managed clients, so the testing was never going to reflect the actual performance our customers receive. This disappointed us greatly, especially since there was no longer any parity with either our own customers or the other hosts we were being compared against.

Kevin’s reasoning was that caching plugins aren’t part of the default installation; however, in our experience fewer than 5% of customers install WordPress from scratch. Generally, they already have an existing site or have paid a developer to produce one for them. We don’t trigger automatic changes immediately on transfer, as history has shown us there’s a high probability of causing errors. Instead, our systems audit the plugins and configuration before triggering the installation (we use wp-cli via Salt).
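A hypothetical Salt state sketch of that audit-then-install flow might look like the following (the state names, paths, and plugin choice are illustrative assumptions, not our actual states):

```yaml
# Illustrative Salt state: audit existing plugins first, then install a
# caching plugin via wp-cli. Paths and plugin slug are assumptions.

audit_existing_plugins:
  cmd.run:
    - name: wp plugin list --format=json --path=/var/www/site/htdocs
    - runas: www-data

install_caching_plugin:
  cmd.run:
    - name: wp plugin install w3-total-cache --activate --path=/var/www/site/htdocs
    - runas: www-data
    - require:
      - cmd: audit_existing_plugins
```

Running the audit first means any plugin conflicts can be caught and reviewed before a change touches the customer's site.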

Our real-world testing has shown that this is sufficient to ensure a site remains up without error with up to a million hits per hour. We even had our Nginx front-end servers pushed hard with malicious traffic, yet they coped with 100,000 connections per minute without faulting.
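As one example of how a front end can absorb that kind of traffic, nginx supports per-IP connection and request limiting. The snippet below is an illustrative sketch with made-up values, not our production configuration:

```nginx
# Illustrative nginx front-end snippet (values are assumptions).
http {
    upstream wordpress_backend {
        server 10.0.0.10:8080;  # assumed backend address
    }

    limit_conn_zone $binary_remote_addr zone=per_ip_conn:10m;
    limit_req_zone  $binary_remote_addr zone=per_ip_req:10m rate=30r/s;

    server {
        listen 80;

        location / {
            limit_conn per_ip_conn 20;                    # cap concurrent connections per IP
            limit_req  zone=per_ip_req burst=60 nodelay;  # throttle request floods
            proxy_pass http://wordpress_backend;
        }
    }
}
```

Limits like these let the front end shed abusive clients cheaply while legitimate traffic continues to be served.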


As I say in all of the benchmarks I publish myself: do your own testing. Every scenario is different, and you need to validate your own set of test conditions. If you have existing metrics you know you need to meet, then that's a great start. If you need help with this, please feel free to contact one of our engineers to discuss it further. We have 15 years of hosting experience and are always happy to assist.

Secondly, performance is only one part of the puzzle. Look at what your managed WordPress requirements are. What level of support do you need? Who does the updates? How does the platform fit into your workflow? The good news is there are plenty of choices.

Lastly, despite not agreeing with some of the test conditions, results, and conclusions, we thank Kevin for being one of the only unbiased hosting review companies out there. This can only be a good thing for the industry, and I hope to see more of it.