We’re all well acquainted with the concept of workload elasticity: the ability to scale workloads up or down as demand changes. However, because this capability can be intricate to deploy, many organizations resort to a patchwork of tools and methods to achieve it. In this blog post, I will demonstrate how cloud elasticity can be implemented quickly and with minimal effort.
A few months ago, during my tenure as an MVP, I was approached by a company seeking assistance with design principles for their work-in-progress application and Azure integration. This company specializes in third-party logistics and has a keen interest in migrating its on-premises infrastructure to Azure. Additionally, they aimed to revamp their existing software architecture to make it more compatible with Azure’s cloud hosting capabilities.
Discovery
To provide effective assistance, I initiated my engagement with a thorough discovery stage. During this phase, I delved into understanding the intricacies of the company’s operations, examining both the “what” and “how” of their business processes. Additionally, I closely assessed the tools and technologies they currently employ to facilitate their daily business functions. This comprehensive exploration allowed me to gain valuable insights into the organization’s unique needs and challenges, paving the way for informed and strategic guidance.
During the discovery phase, I identified the following about the existing software:
- Hosted on-premises
- A monolithic web application
- A large, complex codebase
However, one of the more concerning discoveries was related to the software delivery process. While the team had performed admirably over the years, there were some crucial aspects that had been overlooked.
- No roadmap
- No clearly defined sprints
- No Agile methodology
- Long, unpredictable release cycles
Planning
Following the completion of the discovery phase, we reached a consensus on two pivotal decisions. Firstly, we collectively determined that transitioning the development team to embrace the Agile methodology, complete with a well-defined roadmap and sprint-based development cycles, was imperative for optimizing productivity and project management. Secondly, we committed to migrating the core software to a microservices architecture and hosting it on Azure. This strategic move was driven by the goals of enhancing manageability, improving scalability, and realizing substantial cost savings for the organization.
Implementation
After conducting thorough research, our team arrived at the conclusion that the optimal technology choice for migrating the application would be Azure Container Apps. This decision was grounded in several compelling reasons:
- Easy Deployment: Azure Container Apps offers a streamlined deployment process, simplifying the transition of our application to the cloud environment.
- Scalability: The platform’s inherent scalability features enable us to effortlessly scale our application up or down as needed to meet changing demands.
- Version Control: Azure Container Apps provides robust version control capabilities, making it straightforward to manage and release new versions of our application.
- Continuous Integration and Continuous Delivery (CI/CD): The platform supports CI/CD pipelines, allowing us to implement continuous integration and delivery practices, ensuring smooth and efficient software development and deployment processes.
To commence our exciting new journey, I assigned the team the initial task of creating the Azure Container App resources. I’m pleased to report that the team swiftly and efficiently completed this step by meticulously following the Azure deployment instructions for Container Apps. Their adeptness in executing this task sets a promising precedent for our transition to this technology.
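For teams that prefer the command line over the portal, the same resources can be stood up with the Azure CLI. Below is a minimal sketch, assuming the `containerapp` extension is installed and you are signed in with `az login`; the resource names (`rg-3pl`, `env-3pl`, `app-3pl`) are hypothetical, and the image is Microsoft's public sample container:

```shell
# Create a resource group to hold everything (hypothetical names throughout).
az group create --name rg-3pl --location westeurope

# Create the Container Apps environment that will host the app.
az containerapp env create \
  --name env-3pl \
  --resource-group rg-3pl \
  --location westeurope

# Create the container app itself with external HTTP ingress on port 80.
az containerapp create \
  --name app-3pl \
  --resource-group rg-3pl \
  --environment env-3pl \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --target-port 80 \
  --ingress external
```

In a real migration you would replace the sample image with your own image from a container registry.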
Elasticity
There are several compelling reasons behind my decision to deliver this project on Container Apps. One of the foremost is the flexibility it offers in terms of workload elasticity and ease of deployment. Given the seasonal peaks the company experiences, it is crucial that the application can seamlessly scale up during high-demand periods and scale back down when the load subsides. By implementing Azure Container Apps and following the steps below, we can confidently address the dynamic demands of the business, keeping the application responsive and cost-effective even during the busiest seasons.
- Browse to the container app you wish to make elastic, and from the side menu select “Scale and Replicas”

- Click “Edit and Deploy”, then “Scale”, and set the minimum number of replicas to keep running and the maximum to allow during peaks:

Important Note: In the setup described above, I opted to run a minimum of one container, but it will dynamically scale up to a maximum of five containers during peak usage periods.
- In the “Scale Rule” configuration, I added a condition: when concurrent HTTP requests reach 100, the rule triggers and scales the app out, up to the maximum of 5 replicas.

- Confirm the configuration.
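The same scale configuration can be applied in a single Azure CLI command rather than through the portal. A sketch, assuming a hypothetical app named `app-3pl` in resource group `rg-3pl`:

```shell
# Set replica bounds and an HTTP scale rule in one update.
# The http rule scales on concurrent requests per replica.
az containerapp update \
  --name app-3pl \
  --resource-group rg-3pl \
  --min-replicas 1 \
  --max-replicas 5 \
  --scale-rule-name http-peak \
  --scale-rule-type http \
  --scale-rule-http-concurrency 100
```

Scripting the rule this way also makes it easy to version the scaling configuration alongside the rest of your deployment pipeline.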
How to test
In order to assess and elucidate the advantages of containerization during peak usage times and how this architectural decision results in cost-efficiency, I utilized JMeter. I conducted a stress test on the web application to provide a visual representation of how the number of containers increased to 5 within the “Scale and Replicas – Replicas” configuration.
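If you don’t have a JMeter test plan handy, a rough load burst can be generated from any shell with `curl`. A sketch, assuming a hypothetical ingress URL; the idea is simply to push concurrency past the 100-connection threshold:

```shell
# Hypothetical ingress FQDN; replace with your container app's URL.
URL="https://app-3pl.example.azurecontainerapps.io/"

# Fire 300 requests in parallel to push concurrency past the scale
# rule's threshold of 100; re-run the loop to sustain the load.
for i in $(seq 1 300); do
  curl -s -o /dev/null "$URL" &
done
wait
```

This is cruder than a proper JMeter plan (no ramp-up, think time, or assertions), but it is enough to watch the replica count climb in the portal.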

Subsequently, once the load decreased, the number of containers returned to its standard configuration, which is one.
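The scale-out and scale-back behaviour observed here follows a simple proportional rule: KEDA, which drives Container Apps scaling, sizes the app at roughly the total concurrent requests divided by the per-replica target, clamped between the configured minimum and maximum. A sketch of that arithmetic in shell, using this post’s numbers (target 100, min 1, max 5); the exact runtime behaviour also involves polling intervals and cooldowns, so treat this as an approximation:

```shell
# desired replicas ≈ ceil(concurrent / target), clamped to [min, max].
# Approximates how KEDA's HTTP-based scaling sizes the app.
desired_replicas() {
  local concurrent=$1 target=$2 min=$3 max=$4
  local d=$(( (concurrent + target - 1) / target ))   # ceiling division
  if (( d < min )); then d=$min; fi
  if (( d > max )); then d=$max; fi
  echo "$d"
}

desired_replicas 450 100 1 5   # peak load → 5
desired_replicas 20  100 1 5   # quiet period → 1
```

This is why the replica count settled back to one once the JMeter load stopped: with near-zero concurrent requests, the clamp brings the app down to its configured minimum.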

Summary
In this blog post, I emphasized the advantages of leveraging Azure Container Apps for third-party logistics (3PL) organizations. Numerous use cases drove my decision to choose Container Apps, and in upcoming blog posts I will delve into these benefits in greater detail, discussing how I have helped 3PL organizations deliver their services through web applications while achieving significant reductions in their IT budgets. Stay tuned for more insights in the coming blogs.