Optimizing Web Page Response Time Using Open Source Tools
July 28, 2022
Featured article by Uzair Nazeer
In this article, we are going to discuss a topic that generates a lot of interest: optimizing web page response time using open source tools. Modern web applications are increasingly distributed and diverse, built from many components and languages. Application response time depends on all of these components, and optimizing every one of them is a challenging task for the performance engineer.
But trust me, it can be fun investigating the issues.
To gain an in-depth understanding of performance optimization, we will first discuss client-side vs server-side performance. Thereafter, we will look at tools that can help us measure performance and facets of measuring performance, like monitoring, observation, and alerting.
Before we proceed, we need to answer a few basic questions.
What is application performance?
Application performance is the evaluation of how an application responds to requests under various user loads.
How do we measure it?
We can measure it in various ways: either in production (suboptimal, since issues found there are costly to fix) or by mimicking the production environment and using various tools to replicate user load behavior and see how fast the application responds (optimal, since it helps avoid last-minute surprises). The measurement covers not only response times but also resource utilization.
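As a minimal sketch of replicating user load, the snippet below issues timed concurrent requests with only the standard library and summarizes the response times. The URL, user count, and percentile choice are illustrative assumptions, not a full load-testing tool.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def timed_get(url: str) -> float:
    """Fetch a URL once and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start


def percentile(samples: list, pct: float) -> float:
    """Return the pct-th percentile (0-100) of a list of response times."""
    ordered = sorted(samples)
    index = round((pct / 100) * (len(ordered) - 1))
    return ordered[index]


def run_load_test(url: str, users: int = 10, requests_per_user: int = 5) -> dict:
    """Mimic `users` concurrent users, each issuing several requests."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_get, [url] * (users * requests_per_user)))
    return {
        "avg": statistics.mean(times),
        "p90": percentile(times, 90),
        "max": max(times),
    }


# Usage, against a staging copy of your application (placeholder URL):
#   print(run_load_test("http://localhost:8080/", users=10))
```

Reporting a high percentile (p90 here) alongside the average matters because averages hide the slow tail that real users actually feel.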
Client-side performance is a bit different from application server response time: the browser combines static content and the application server's response into a single output for the end user.
The static content is served by the web tier, while the dynamic content is produced by the app tier with the help of a database or other orchestration services.
Once the browser receives this content, it tries to paint the web page from the given HTML, CSS, JS, and other web resources. Visualizing this content is called rendering. What we actually measure as the end-user response time is the sum of network time + server response time + render time.
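The sum above can be made concrete with the browser's Navigation Timing marks. The split below (time to first byte, body download, render) is one reasonable approximation built on the legacy `performance.timing` attribute names; the sample millisecond values are hypothetical.

```python
def response_time_breakdown(t: dict) -> dict:
    """Split end-user response time using Navigation Timing marks (ms).

    Keys follow the legacy performance.timing attribute names; this split
    is one reasonable approximation, not the only possible one.
    """
    ttfb = t["responseStart"] - t["navigationStart"]   # network + server time
    download = t["responseEnd"] - t["responseStart"]   # transfer of the body
    render = t["loadEventEnd"] - t["responseEnd"]      # browser parse/paint
    total = t["loadEventEnd"] - t["navigationStart"]
    assert ttfb + download + render == total           # the parts sum to the whole
    return {"ttfb": ttfb, "download": download, "render": render, "total": total}


# Hypothetical marks, in milliseconds relative to navigation start.
marks = {
    "navigationStart": 0,
    "responseStart": 320,
    "responseEnd": 450,
    "loadEventEnd": 1200,
}
print(response_time_breakdown(marks))
# {'ttfb': 320, 'download': 130, 'render': 750, 'total': 1200}
```

A breakdown like this tells you where to look first: a large `ttfb` points at the server or network, while a large `render` points at the front end.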
There are many open-source tools available for measuring the rendering time.
Apache JMeter (using Selenium webdriver):
JMeter is one of the best open source tools: it supports multiple protocols, is easily extensible, and is easy to use. It can support both functional and non-functional testing. It has a vast user community and an expanding list of plugins that add protocols and functionality not available by default.
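JMeter's WebDriver sampler drives a real browser; the same idea can be sketched in Python with the `selenium` package, reading the browser's own Navigation Timing marks after a page load. This assumes `selenium` and a matching chromedriver are installed; the import is kept inside the function so the rest of a script stays importable without them.

```python
def measure_page_load(url: str) -> float:
    """Drive a headless Chrome and return the full page-load time in seconds.

    Requires the third-party `selenium` package and a chromedriver on PATH
    (an assumption -- adjust for your environment).
    """
    from selenium import webdriver  # lazy import: third-party dependency

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)  # blocks until the load event fires
        # Ask the browser for its own Navigation Timing marks (milliseconds).
        elapsed_ms = driver.execute_script(
            "const t = performance.timing;"
            "return t.loadEventEnd - t.navigationStart;"
        )
        return elapsed_ms / 1000.0
    finally:
        driver.quit()


# Usage:
#   print(measure_page_load("https://example.com/"))
```

Measuring inside the browser like this captures render time, which a plain HTTP sampler cannot see.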
Chrome Developer Tools:
Chrome Developer Tools are very handy and are built into the browser. They provide a very detailed analysis of the resources used in loading the web page: network time, server response time, render time, etc.
Lighthouse:
Lighthouse is an automated tool and a part of Google Chrome. It audits the quality (performance, accessibility, SEO, etc.) of web pages and provides suggestions on how to improve them.
WebPageTest:
WebPageTest is an open-source tool that is very useful for measuring the performance of web pages. It provides many options for configuring the test and also has a vast user community.
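WebPageTest also exposes an HTTP API for kicking off test runs. The sketch below builds and submits a "run test" request with the standard library; the parameter names (`url`, `k`, `f`, `location`) follow the public WebPageTest API, but check your instance's documentation, and the API key is a placeholder.

```python
import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://www.webpagetest.org/runtest.php"  # public instance


def build_test_url(target: str, api_key: str, location: str = "Dulles:Chrome") -> str:
    """Build a WebPageTest 'run test' API URL requesting a JSON response."""
    params = {"url": target, "k": api_key, "f": "json", "location": location}
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)


def submit_test(target: str, api_key: str) -> dict:
    """Submit a test run and return the parsed JSON response."""
    with urllib.request.urlopen(build_test_url(target, api_key)) as resp:
        return json.load(resp)


# Usage (the key is a placeholder -- request one from webpagetest.org):
#   print(submit_test("https://example.com/", "YOUR_API_KEY"))
```

Driving the API from a script makes it easy to run the same test after every release and compare results over time.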
GTmetrix:
GTmetrix is a very popular tool that provides a detailed analysis of web page performance. It also provides recommendations on how to improve the performance.
A multi-tiered architecture makes for a highly reliable application: because the structure is distributed, it supports resiliency, reliability, and scalability even under heavy load. In such a structure, a load balancer distributes the load among multiple instances/nodes so that no single instance is overstressed.
When it comes to an application’s architecture, we can usually divide it into 3 tiers:
1. Web tier (Presentation to manage static content)
2. App tier (Business logic)
3. Data storage tier (DB)
The client-side performance relies on multiple components within the architecture. To evaluate the performance, we need to use the right tool to mimic the exact production load.
Performance degradation can occur in any area of the architecture. To understand it, one must know the application's architectural flow.
The application access logs are also a good place to start: with debug logging enabled, check the time taken at each layer.
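Checking per-layer time from logs can be as simple as a small parsing script. The log line format below is hypothetical (timing fields per tier appended to each request line); adjust the regex to whatever your server actually emits.

```python
import re
from statistics import mean

# Hypothetical access-log format (adjust the regex to your server):
#   2022-07-28T10:15:02Z GET /checkout 200 web=12ms app=230ms db=95ms
LINE_RE = re.compile(
    r"\S+ \S+ (?P<path>\S+) \d+ web=(?P<web>\d+)ms app=(?P<app>\d+)ms db=(?P<db>\d+)ms"
)


def layer_averages(lines):
    """Average the time spent in each tier across all parsed log lines."""
    totals = {"web": [], "app": [], "db": []}
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # skip lines that do not match the expected format
        for layer in totals:
            totals[layer].append(int(m.group(layer)))
    return {layer: mean(vals) for layer, vals in totals.items() if vals}


log = [
    "2022-07-28T10:15:02Z GET /checkout 200 web=12ms app=230ms db=95ms",
    "2022-07-28T10:15:03Z GET /checkout 200 web=10ms app=250ms db=105ms",
]
print(layer_averages(log))  # here the app tier dominates -> investigate there first
```

Even a rough average per tier narrows the search dramatically before reaching for heavier tooling.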
Monitoring is essential for measuring performance. Again, we are blessed with so many open source monitoring and APM tools. Monitoring provides the resource utilization information for various components in a distributed architecture.
One of the most widely used open source monitoring tools is the Grafana + Prometheus/InfluxDB combo, which can store resource utilization information and render it in a very effective way to pinpoint the issues.
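For Prometheus to scrape an application, the application only has to serve plain text in Prometheus' exposition format. The sketch below does that with nothing but the standard library; the metric names and values are hypothetical placeholders.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_metrics(samples: dict) -> str:
    """Render name -> value pairs in Prometheus' text exposition format."""
    lines = []
    for name, value in sorted(samples.items()):
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"


class MetricsHandler(BaseHTTPRequestHandler):
    """Tiny /metrics endpoint that a Prometheus server can scrape."""

    def do_GET(self):
        # Hypothetical gauges -- a real app would report measured values.
        body = render_metrics({
            "page_response_seconds": 0.42,
            "db_query_seconds": 0.10,
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


print(render_metrics({"page_response_seconds": 0.42}))

# To expose it for scraping (add `localhost:8000` to prometheus.yml targets):
#   HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

Once Prometheus stores these series, Grafana dashboards and alerts come almost for free.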
To trace, in a hassle-free way, where a request's response degrades across multiple tiers, we can add APM tools to our architecture.
Some of the more renowned open source APM tools are Apache SkyWalking, Pinpoint, and SigNoz.
Observability is becoming a game changer that goes beyond traditional application monitoring. It gives engineers a platform to monitor resource metrics and identify the root cause of issues. Tools like OpenTelemetry and Opstrace provide full flexibility for achieving observability with open source software.
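The core idea behind tracing in such tools is the "span": a named, timed unit of work, nested to mirror the call structure. The toy stand-in below illustrates that idea with the standard library only; it is not the OpenTelemetry API.

```python
import time
from contextlib import contextmanager

SPANS = []  # collected (name, duration_seconds) pairs


@contextmanager
def span(name: str):
    """Record how long a named unit of work took -- a toy stand-in for
    tracing spans in tools like OpenTelemetry, not their real API."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((name, time.perf_counter() - start))


# Nested spans mirror the request's call structure.
with span("handle_request"):
    with span("db_query"):
        time.sleep(0.01)   # stand-in for real work
    with span("render_template"):
        time.sleep(0.005)  # stand-in for real work

for name, duration in SPANS:
    print(f"{name}: {duration * 1000:.1f} ms")
```

Inner spans finish (and are recorded) before their parent, so the slowest child immediately stands out against the total request time.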
Client-Side vs Server-Side Performance
Not all performance issues arise with application servers/components in the architecture. It is possible that all our application components perform fast, but the end user still experiences sluggish speeds.
Application performance is composed of many factors. Our server may respond fast, but the end user's device, browser, network, or UI rendering may be slow. This can be the result of poorly implemented HTML, CSS, or JS. It could also be that the server is not configured to compress data before sending it to the browser.
Compression of data can make a big difference in network traffic and load time for the user. As an example, let's say our application server is located in Europe and our end user is in Australia. Without compression, every time the user requests a page, all the HTML, CSS, JS, images, etc. travel uncompressed from Europe to Australia, which takes time. But if the server compresses the response before sending it (with gzip or Brotli, for example), it reaches the user much faster because less data crosses the wire.
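How much compression helps is easy to demonstrate: markup is repetitive, and repetitive data compresses extremely well. The payload below is a made-up HTML fragment repeated many times, gzipped with the standard library.

```python
import gzip

# A repetitive HTML payload compresses very well, which is exactly why
# enabling gzip/Brotli on the server shrinks what travels over the wire.
html = ("<div class='product-card'><span>item</span></div>" * 500).encode()
compressed = gzip.compress(html)

print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(html):.1%}")
```

Real pages will not compress quite this dramatically, but double-digit savings on text assets are typical, and the server-side CPU cost is usually negligible by comparison.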
Network latency can also play a role in performance. If you are not familiar with the term, network latency is basically the time it takes for data to travel from one point to another. The further away the user is from the server, the higher the latency and consequently, the slower the performance.
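A rough way to feel latency from code is to time the TCP handshake to a server. The sketch below measures connect time; the demo runs against a throwaway local listener, but in practice you would point it at your application server from different regions and compare.

```python
import socket
import time


def tcp_connect_time(host: str, port: int) -> float:
    """Time the TCP handshake to host:port in seconds -- a rough proxy
    for network latency to that server."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start


# Demo against a throwaway local listener; in practice, point host/port at
# your application server (e.g. its HTTPS port) from different regions.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

latency = tcp_connect_time("127.0.0.1", port)
print(f"TCP connect time: {latency * 1000:.3f} ms")
listener.close()
```

Loopback latency is near zero; the same handshake across continents takes hundreds of milliseconds, which is why CDNs place content close to users.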
There are many other factors that can affect application performance such as different browser types and versions, different devices, screen resolutions etc. As you can see, there are quite a few things to consider when trying to improve performance.
Monitoring the performance of our applications is essential for ensuring that they are running smoothly and efficiently. There are many open source tools available for measuring the performance of our applications. We need to choose the right tool for the job at hand, whether it’s load testing, monitoring or observability. We also need to keep in mind that performance is not just about the speed of our applications, but also about how well they handle failures and unexpected loads.