One of the major innovations to hit the IT landscape in recent years, alongside the cloud, Infrastructure as Code, and other DevOps techniques, is the rise of containers. Docker containers evolved from an experiment into a core part of corporate IT culture. These days, entire environments run on top of Kubernetes deployments, with containers provisioned and destroyed in real time as needed. The beauty of a setup like this is that it makes scaling easy, with new containers made available on demand, while maintenance is also simpler since containers can be recreated from scratch using a template. Fans of the concept may be forgiven for thinking every situation can be made to fit into a container, but is that really the case? Are there cases when containers shouldn't be used?
First, containers can be adapted to most situations. With the freely available templates and images, they are probably the easiest way to stand up a simple Nginx web server or run a database engine. But if you're building specialized applications, you're unlikely to find pre-built images. That means you will need to spend more time creating your own base image along with all the necessary layers, so if you're looking to get started quickly, containers may not be for you. Similarly, some software is designed to run on an actual machine, even a virtual instance, and you may run into conflicts when you attempt to run it inside a container.
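As a minimal sketch of that difference: the nginx image below is the real official one from Docker Hub, while the commented-out application, its package, and its paths are hypothetical stand-ins.

```dockerfile
# Off-the-shelf case: the official pre-built image does nearly all the work.
FROM nginx:1.27
COPY site/ /usr/share/nginx/html/

# Specialized case (hypothetical app and dependencies): you assemble and
# maintain every layer yourself, starting from a bare OS image.
# FROM ubuntu:24.04
# RUN apt-get update && apt-get install -y --no-install-recommends libfoo \
#     && rm -rf /var/lib/apt/lists/*
# COPY myapp/ /opt/myapp/
# ENTRYPOINT ["/opt/myapp/bin/run"]
```

The off-the-shelf version builds and runs with nothing more than `docker build -t mysite .` and `docker run -p 8080:80 mysite`; the specialized version is where the extra time goes.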
Containers also add startup time. If you need a workload to run continuously, it may be more advantageous to scope the number of compute instances you require and keep those instances running, rather than rely on containers. Updates also work differently. If you have an actual instance running most of the time, software updates are typically handled by scheduled tasks. With a container, the idea is to have a very short life span: the container gets recreated every time from the base image. So to upgrade anything, you have to rebuild the base image, which is more labor intensive. Many teams forget, or choose not to, and it's far more common to see containers running outdated software that may contain security holes.
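Here is a hedged sketch of why that is, assuming a Debian base image: the upgrade step runs at build time, so a container only picks up patches when the image is rebuilt and redeployed.

```dockerfile
FROM debian:12

# Patches are baked in when the image is built, not applied at runtime.
# A container started from a six-month-old image still runs
# six-month-old packages.
RUN apt-get update && apt-get upgrade -y \
    && rm -rf /var/lib/apt/lists/*

# On a long-lived instance, a cron job or unattended-upgrades would patch
# in place; here the only upgrade path is rebuild, push, redeploy.
```

Rebuilding with `docker build --pull -t myimage .` forces a fresh pull of the base image as well, which is exactly the step that tends to get skipped.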
Many people conflate containers with serverless computing, when they are very different things. If you run Python code in a serverless environment like AWS Lambda, the base system is handled by the cloud provider. That means your code runs on top of an operating system maintained by the provider's own IT teams, who take care of software updates and security. If you instead run your Python code in a container on a service like EKS, Amazon's hosted Kubernetes, the entire operating system is basically yours to manage and maintain. This includes making sure the version of Linux in your base image is up to date, along with any other software and libraries in there.
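To make the contrast concrete, here is a sketch of the same kind of Python code packaged for EKS; the file name and the pinned Flask dependency are illustrative assumptions, but every layer except the application code itself is maintenance work that Lambda would absorb for you.

```dockerfile
# Running Python on EKS means owning the whole stack underneath it.

# A full Debian-based userland you must keep patched:
FROM python:3.12-slim

# Dependency versions you must track and update (Flask pinned as an example):
RUN pip install --no-cache-dir flask==3.0.*

# Roughly the only part Lambda would also ask of you: the code itself.
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

On Lambda, the rough equivalent is uploading app.py alone; the operating system and runtime patching stay with the provider.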
At the end of the day, containers can be adapted to most workloads. However, there are times when it may not be practical or desirable to use them. It's up to you to decide when that is.