JMeter vs Selenium
We are often asked what we use for performance testing of web applications, what we use for functional testing and what we recommend.
Our first response is to find out what is under test. A web service or browser-based application lends itself to being tested by both of our chosen technologies: JMeter and Selenium.
First up let’s be clear – the answer is often “both” and not one or the other.
In short, JMeter tests load-based performance and Selenium tests functionality. While you can bend each to do the other's job, these days there really is no need.
We think of it this way:
JMeter is very good at sending a high volume of HTTP/S requests at a server or URL. You can define ramp-up and ramp-down profiles, and you can measure response times at the varying load levels. JMeter does not do very much with the responses: it does not execute the browser-side components, so it measures the performance of the server-side components rather than the performance seen by the end user. It is certainly not emulating any particular browser type, nor taking the time to construct a DOM.
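JMeter does all of this for you through thread groups and ramp-up settings, but the underlying idea is simple enough to sketch. The following Python snippet is purely an illustration of what "measure response times at varying load levels" means, not a substitute for JMeter; it spins up a throwaway local server as the target, since any real URL here would be an assumption.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Stand-in target: a throwaway local HTTP server. In real testing this
# would be the application under test, and JMeter would drive the load.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def timed_request(url):
    """Issue one GET and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load(url, concurrency):
    """Fire `concurrency` simultaneous requests; return fastest and slowest times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(timed_request, [url] * concurrency))
    return min(times), max(times)

# Ramp up: step the concurrent-session count upward, measuring at each level.
for level in (1, 5, 10):
    fastest, slowest = run_load(url, level)
    print(f"{level:>2} concurrent: fastest {fastest * 1000:.1f} ms, slowest {slowest * 1000:.1f} ms")

server.shutdown()
```

Note that nothing here parses the HTML, runs JavaScript or builds a DOM; that is exactly the browser-side work described above that JMeter leaves out.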
Selenium, on the other hand, is much more interested in what happens on, to and in the browser. It executes all of the client-side componentry, which makes it a very heavy computing load, and as a result it does not scale well.
So, in summary, our answer to the question “what do you recommend?” is this:
We tend to use JMeter to add load to the server, to see how it scales, and to check whether a change has improved or degraded load-based performance. This makes sense because growth in concurrent sessions has a direct performance effect on the central server-side componentry.
We tend to use Selenium to test the functionality of the application as perceived by the end user and to perform ongoing and relentless cross-browser and regression testing.
Finally, we blend the two: we put a known load on the server side (using JMeter if natural load is not sufficient or predictable) and then measure the impact of that load on the browser (using Selenium or a Selenium-like distributed solution), and thus on the end-user experience. Concurrent sessions have a direct impact on server-side performance, but any change in server-side performance also has an indirect impact on the browser-based end-user experience – and that indirect impact is lost if you do not plan for blended testing.
Our footnote would be to urge you to start building a suite of easily repeated tests (in DevOps parlance this may live in a ‘playbook’, ‘recipe’ or ‘runbook’) that are performed constantly as part of your business-as-usual application monitoring regime, regardless of need or trigger event.
Trend these results, either in the source tool or by throwing them into a logging tool like Splunk.
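If Splunk is the destination, results can be pushed over its HTTP Event Collector (HEC). The sketch below is a minimal illustration: the endpoint URL, token and sample payload are placeholders you would substitute with your own, and the payload shape beyond the outer `{"event": ...}` wrapper is entirely up to you.

```python
import json
import urllib.request

def send_to_splunk(endpoint, token, event):
    """POST one test result to a Splunk HTTP Event Collector endpoint.

    `endpoint` is typically https://<your-splunk-host>:8088/services/collector/event
    and `token` is your HEC access token – both placeholders here.
    """
    body = json.dumps({"event": event}).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Splunk {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example payload: one hypothetical JMeter-style timing sample.
sample = {"test": "checkout-page", "concurrency": 50, "avg_ms": 312}
```

Once the samples are in Splunk, trending them over time is a saved search away, which is what makes the “keystroke change” claim below realistic.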
This one operational behaviour will result in better sleep at night for the operations team and confidence at release time for the development team. Summaries of these trends will also help show the exec team that you have this under control, and ramping up the loads in anticipation of seasonal application peaks becomes a keystroke change rather than another IT project.
We can help – we offer both of these testing platforms in the cloud. You can drive them, or we will do that for you. We can help you put in place a cost-effective plan for continuous testing, scaling to thousands of sessions with no need for infrastructure at your place.
If you are not ready for that then we can also just chat over a coffee to make sure you are headed in the right direction.