Containerized Application Development

Development and creation of containerized applications

Whether you are developing new software or maintaining an existing legacy application, there is hardly a way around containers: they offer many advantages during development, rollout and distribution as well as in operation across the entire software lifecycle.

However, integrating containers into existing processes harbors some pitfalls. We are happy to contribute our know-how, whether in transferring existing applications into containers or in adapting container technologies to your software development processes.

Contact

Michael Lötzsch
CEO
+49 351 4400 8114
+49 151 6243 2605
mloetzsch (at) proficom.de

Containerized Application Development

The software industry has seen rapid growth in recent years. Ever faster processors and an increasingly networked environment are challenging companies. The advent of cloud computing and IoT devices has enabled new approaches to software development. Companies are adopting these approaches and responding to the challenge with microservices, DevSecOps and continuous integration in order to meet user expectations. In this context, more and more software manufacturers are developing their applications container-based (cloud-native) or relying entirely on serverless cloud computing.

But what is the best way to start using containers in your own development, and do you have to reprogram everything?

Indeed, the transition from traditional legacy applications to container-based microservices poses major challenges for companies. Not least because the infrastructure required for container operation must first be provisioned and properly configured: it forms the standardized substructure on which container platforms such as Kubernetes, Rancher or OpenShift run. Ideally, however, the principle of Infrastructure as Code is applied, so that the required infrastructure is automated and documented in a few scripts and can then be scaled up in the cloud as needed, in a few simple steps.
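As an illustration of the Infrastructure as Code principle, a minimal Terraform sketch could describe a managed Kubernetes cluster in a few lines; the provider, project, region and node count below are hypothetical placeholders, not a production configuration:

```hcl
# Infrastructure-as-Code sketch (Terraform with the Google provider assumed).
# All names and values are illustrative placeholders.
provider "google" {
  project = "my-demo-project"   # hypothetical project ID
  region  = "europe-west3"
}

# A small managed Kubernetes cluster; scaling it up later
# is a one-line change that stays versioned and documented.
resource "google_container_cluster" "demo" {
  name               = "demo-cluster"
  location           = "europe-west3"
  initial_node_count = 3
}
```

Because the cluster definition lives in version control, the same infrastructure can be reproduced, reviewed and scaled like any other code.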

After the container management platform is up and running, the application can be migrated.

There are three conceivable scenarios: a completely container-based or "serverless" new development, a migration of the legacy application with only the most necessary changes, or the middle way, refactoring, in which the "old" application is split up so that individual services can be mapped to their own containers.

Advantages of containerization
Resource conservation and cost reduction

Containers consume fewer resources than VMs, which means that the resources that have become free can be used for other purposes or the costs for operating the application can be reduced.

Scalability and portability

If implemented correctly, container-based applications can be scaled and ported quickly and easily - regardless of the container management platform or the cloud used.

Automation

Containers can be automated very well and, depending on the application, can be started, stopped or increased.

Speed when rolling out new features through CI/CD

With continuous integration, new features can be rolled out at unprecedented speed. Instead of one "big" release per quarter, there are hundreds of smaller releases every day, each with new features that are automatically tested and rolled out.


Easier failure analysis

The devil is often in the details. Monitoring and logging provide insight into the application's behavior: you can see exactly which module of the application has a problem, and with what, and fix it quickly.

Security by isolating old, insecure code

Migrating legacy applications into containers can isolate old, potentially insecure code, reducing the risk that the application is exploited as a vector for outside attacks.
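The scalability and automation advantages above can be made concrete with a Kubernetes HorizontalPodAutoscaler, which starts and stops container replicas automatically based on load. A minimal sketch, in which the deployment name and thresholds are hypothetical:

```yaml
# Sketch: automatic scaling of a containerized application in Kubernetes.
# "web-app" and the CPU target are illustrative placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2        # never run fewer than two replicas
  maxReplicas: 10       # cap resource consumption (and cost)
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add replicas above 80% average CPU
```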

Cloud Native Journey

Packing your application into a container and then running it sounds easy. In fact, it is a journey with many points to consider along the way. For the deployment and operation of applications in a microservice architecture to run smoothly, not only the setup but also the operation of the container platform plays an important role.


Containerization

This means "packing" the application into container images, usually with Docker or similar tools. The application itself and all of its dependencies, such as binaries or runtime environments, end up in the container. There are almost no limits to the type of application. It is recommended to separate functional parts of the application from one another and operate them as so-called microservices in their own containers. This process can also be implemented gradually.
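As a sketch of this "packing" step, a minimal Dockerfile for a small Python service might look like this (base image, file names and port are assumptions, not taken from the original text):

```dockerfile
# Hypothetical example: containerizing a small Python web service.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies first so this layer is cached
# across rebuilds when only the code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself into the image.
COPY . .

# Document the port the service listens on and define the start command.
EXPOSE 8000
CMD ["python", "app.py"]
```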


Orchestration and application definition

In a corporate environment, software has to meet requirements for high availability, scalability and security more than ever. Mature and proven automation and orchestration solutions such as Kubernetes or Helm make this possible: they run the containerized applications in clusters that manage the application containers autonomously.
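A minimal Kubernetes Deployment illustrates how such a cluster manages application containers on its own; the names, image and replica count below are placeholders:

```yaml
# Sketch: Kubernetes keeps three replicas of this container running,
# restarting or rescheduling them on failure. All names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired number of identical containers
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # hypothetical image
          ports:
            - containerPort: 8000
```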


CI/CD 

Integrating the containerization process into a continuous integration / continuous delivery pipeline means that every change to the application's code automatically leads to a new container image, which is built, tested and then rolled out. This brings great advantages in application quality, security, speed of reaction to new requirements and efficiency.
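Such a pipeline could be sketched, for example, as a GitLab CI configuration in which every commit builds, tests and publishes a new container image; the stage names, registry URL and commands are assumptions for illustration:

```yaml
# Hypothetical .gitlab-ci.yml sketch: each code change produces a
# freshly built and tested container image.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind          # Docker-in-Docker for building images in CI
  script:
    - docker build -t registry.example.com/web-app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/web-app:$CI_COMMIT_SHORT_SHA

test-image:
  stage: test
  image: registry.example.com/web-app:$CI_COMMIT_SHORT_SHA
  script:
    - python -m pytest     # run the test suite inside the new image

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/web-app
      web-app=registry.example.com/web-app:$CI_COMMIT_SHORT_SHA
```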


Monitoring and analysis

Logging, monitoring and analysis tools are essential to maintain an overview of all applications and messages and to be able to react to developments in good time. With Prometheus, Fluentd, Kibana or Elasticsearch, there are powerful and, moreover, free solutions that support this task.
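A fragment of a Prometheus configuration shows how metrics from Kubernetes pods can be collected automatically; the job name and annotation convention shown are common defaults, not taken from the original text:

```yaml
# Sketch of a Prometheus scrape configuration: pods are discovered
# via the Kubernetes API instead of being listed by hand.
scrape_configs:
  - job_name: "kubernetes-pods"     # illustrative job name
    kubernetes_sd_configs:
      - role: pod                   # discover all pods in the cluster
    relabel_configs:
      # Only scrape pods that opt in via an annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```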


Service Proxy, Discovery and Mesh

Modern applications are often developed and operated according to the microservice principle, in which the individual functionalities of an application are outsourced to separate containers. This leads to new requirements, for example for service discovery, so that the individual microservices can be linked with one another. In addition, many other features can be used such as health checking, routing, authentication or load balancing. 
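In Kubernetes, basic service discovery is handled by Service objects: microservices reach each other under a stable DNS name instead of hard-coded addresses. A minimal sketch, with hypothetical names and ports:

```yaml
# Sketch: other microservices can now reach the pods behind this
# Service via the cluster-internal DNS name "orders".
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders          # routes traffic to pods carrying this label
  ports:
    - port: 80           # port the Service exposes
      targetPort: 8000   # port the container actually listens on
```

Load balancing across the matching pods comes for free; features such as health checking, routing or authentication are typically layered on top by a service mesh.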


Networking, Policies and Security

While software-defined networking has been announced for the data center for years, with Kubernetes it has long been a reality. Granular rights management via policies and the connection of various identity providers (LDAP, OpenID Connect, etc.) for authentication are also supported.
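Granular network rules of this kind can be expressed, for example, as a Kubernetes NetworkPolicy; the labels and port below are hypothetical:

```yaml
# Sketch: only pods labelled "app: frontend" may reach the database
# pods, and only on port 5432. All other ingress traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 5432
```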


Distributed storage

Block storage, file storage, object storage or databases: modern applications pose new challenges for operations when it comes to storing data. To keep track of things, many different helpers are available, such as Rook, a tool for orchestrating various storage solutions.
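From the application's point of view, such storage is typically consumed through a Kubernetes PersistentVolumeClaim, regardless of which backend (for example, one orchestrated by Rook) actually provides it; the size and storage class here are placeholders:

```yaml
# Sketch: the application requests 10 GiB of storage; which storage
# system fulfils the claim is decided by the cluster configuration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce            # mountable by a single node at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-ssd   # hypothetical storage class name
```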


Container registries and software distribution

After an application has been containerized, the resulting container image must be stored somewhere, be it for later access or for transfer to staging or production environments. In addition, container images must be protected against manipulation by third parties and, if necessary, scanned for vulnerabilities. These requirements are met by so-called container registries, which both serve as storage for the images and can also perform security scans. Proven solutions are, for example, Harbor or Quay in combination with the Clair scanner.
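On the consuming side, a deployment then references the image in such a registry and authenticates with a pull secret; the registry host, image path and secret name here are assumptions:

```yaml
# Sketch: pulling an image from a private registry (e.g. Harbor).
# "harbor.example.com" and "registry-credentials" are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      imagePullSecrets:
        - name: registry-credentials   # kubernetes.io/dockerconfigjson secret
      containers:
        - name: web-app
          image: harbor.example.com/team/web-app:1.0
```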

Any questions?

We are happy to provide you with know-how, specific support services and associated license and support offers.
