Containers are here to stay. Count on that fact as we move into 2017.

Their benefits are undeniable. But while containers were once promoted as our savior for portability and ease of development, in 2017 enterprises will be faced with some real choices and real obstacles. It’s time to understand what role containers play, what works and what doesn’t, and, step by step, how to leverage containers properly. As we exit the party stage (meaning the hype period) for containers, it’s time to grapple with the reality of getting this stuff working in 2017.

This article builds on my recent reports on containers for TechBeacon and elsewhere, going into greater detail than fits in a summarized report. The goal is a deeper dive into containers, with a look at how container use may develop in 2017. If you need a container tutorial, please look elsewhere. This is about where container technology is going and how to make money with it in 2017.

Most container promises have been kept, a far better result than most technologies achieve! Containers do indeed deliver portability, and with cluster managers, they can scale and provide enterprise-level performance. That said, they typically are not a good fit for legacy applications, which almost always need some major surgery before they can be “containerized.”

Application containerization in 2017

So, the larger question is this: Will containers provide the ability to modernize legacy applications in 2017?

To answer that question, review the table below, which considers several features that are needed from the three main types of approaches to moving applications to the cloud:

  • Moving to containers
  • Refactoring (rewriting some or all of the application for the target cloud)
  • Lift and shift (moving the application without any modifications)

Within those approaches, we consider the disruptive vectors listed in the table below.

Table 1: The disruptive vectors, with weighting.

This table assigns weighting based on what most enterprises consider important, but your organization could be a bit different. Adjust accordingly. Weighting allows us to compare and contrast these vectors with the three types of application migrations, including containerization.

Table 2: Ranking the approaches, based on enterprise use.

The rankings apply the weighting above to each disruptive vector to produce its score:

  • Code and data portability means the ability to port code and data from the source to the target platform, whether or not they live within containers. If done correctly, the containers should be able to move from platform to platform without modifying the code or the structure of the data.
  • Cloud native features means that we can leverage the native features and functions of the host cloud platform. Here, refactoring is the stronger approach, since we rewrite the application to leverage those specific features.
  • Application and data performance is roughly the same between containers and refactoring, but very poor with lift and shift, since we do no optimization for performance on the target platform or cloud.
  • Use of services is also very strong with both containers and refactoring, since both can deal directly with the native cloud services.

The same holds for governance and security. Both containers and refactoring are closer to the host platform, and thus can leverage its security and governance services.

Finally, business agility lets the organization make changes and expansions easily. It applies to containers in the sense that, once they are built, we should be able to scale them or change the platforms they run on.

Of course, the biggest issue that most enterprises consider is cost. The cost is pretty much the same for building applications that leverage containers (such as Docker) and for refactoring. Both are invasive, and thus they both bring cost and risk to the equation.
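Before moving on, it’s worth making the weighting mechanics concrete. Here is a minimal sketch in Python; the weights and per-vector scores are illustrative placeholders, not the actual values from Tables 1 and 2, so substitute numbers that reflect your own organization before drawing conclusions.

    # Illustrative only: weights and scores are placeholders, not the
    # actual values from Tables 1 and 2. Adjust both to your organization.
    WEIGHTS = {
        "code and data portability": 0.20,
        "cloud native features": 0.15,
        "performance": 0.20,
        "use of services": 0.15,
        "governance and security": 0.15,
        "business agility": 0.15,
    }

    SCORES = {  # hypothetical 0-10 scores, one per disruptive vector above
        "containers": [9, 6, 8, 8, 8, 8],
        "refactoring": [5, 9, 8, 8, 8, 7],
        "lift and shift": [4, 2, 3, 3, 4, 4],
    }

    for approach, scores in SCORES.items():
        total = sum(w * s for w, s in zip(WEIGHTS.values(), scores))
        print(f"{approach:>14}: {total:.2f}")

Adjust the weights to match what your organization considers important, rerun, and see which of the three approaches comes out on top for your portfolio.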

The legacy question

All things being equal, how well do containers work and play with legacy applications? The answer can be heavily affected by the application platform.

For traditional mainframe applications, the short answer is that containers are almost never a fit, unless the applications are rewritten in more current programming languages. Of course, that reflects the fact that these systems are typically more than 20 years old. For these types of workloads, it’s better to leave them where they are. They are not candidates for any of the approaches profiled above, including containers.

For databases and applications written in Java, Python, C++, and other more contemporary languages, consider their core characteristics, such as:

  • How was the application designed? If the application was created to be distributed, and most of its components are easy to separate, then it could be a good candidate for containers.
  • How does it use data? If the database is tightly coupled with the core application and is difficult to separate, then it’s an unlikely candidate for containers.
  • Which enabling technology does the application use? If the application leverages proprietary languages, databases, middleware, or other enabling technology that is not well supported by containers, then it’s an unlikely candidate for containers.
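One way to operationalize that checklist is to encode it as a simple screening function and run your application portfolio through it. Here is a minimal sketch in Python; the field names are hypothetical, and the pass/fail logic is deliberately conservative: any red flag disqualifies.

    # Minimal sketch: encode the screening questions above as a simple
    # check. Field names are hypothetical; adapt them to however your
    # portfolio inventory describes applications.
    from dataclasses import dataclass

    @dataclass
    class AppProfile:
        distributed_design: bool    # components easy to separate?
        data_tightly_coupled: bool  # database hard to split out?
        proprietary_stack: bool     # tech not well supported by containers?

    def container_candidate(app: AppProfile) -> bool:
        # Deliberately conservative: any red flag disqualifies.
        return (app.distributed_design
                and not app.data_tightly_coupled
                and not app.proprietary_stack)

    # Example: a distributed app with a separable database and a
    # mainstream stack passes the screen.
    print(container_candidate(AppProfile(True, False, False)))  # True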

What does this all mean? Keeping the constraints listed above in mind, the general conclusion is that few legacy applications are good candidates for containers. This rule of thumb has many exceptions, and you might find hundreds of legacy applications that are good candidates, but the general conclusion holds.

In contrast, containers are almost always a good idea when applications are built from the ground up; indeed, new applications are often designed with containers in mind from the outset. There are a few reasons for this course of action:

  • Developers can design the applications to leverage containers. That means distributing core components, including data, so they are easily placed into containers. This green-field approach lets developers optimize the application for containers and thus increases the chance of success.
  • New applications can be built with a mind toward portability. Many legacy applications make built-in native platform API calls; those calls tie them to the host platform and limit the ability to move them. When developers approach net-new applications, these limits can be avoided, as the sketch below illustrates.
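Here is a minimal sketch of that pattern in Python. Everything in it is hypothetical (the ObjectStore interface, the STORAGE_BACKEND variable); the point is that any platform-specific call sits behind an interface, chosen by configuration, that the rest of the application never sees.

    import os

    class ObjectStore:
        """Abstract interface the application codes against."""
        def put(self, key: str, data: bytes) -> None:
            raise NotImplementedError

    class LocalDiskStore(ObjectStore):
        """Portable default: works anywhere the container runs."""
        def __init__(self, root: str = "/data") -> None:
            self.root = root

        def put(self, key: str, data: bytes) -> None:
            with open(os.path.join(self.root, key), "wb") as f:
                f.write(data)

    # A cloud-specific implementation would subclass ObjectStore too,
    # keeping its SDK calls out of the application code entirely.

    def make_store() -> ObjectStore:
        # The backend is chosen by configuration, not by hardcoded
        # platform API calls, so the container stays portable.
        backend = os.environ.get("STORAGE_BACKEND", "local")
        if backend == "local":
            return LocalDiskStore()
        raise ValueError(f"unknown backend: {backend}")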

Integrating containers and DevOps

In 2017, containers and DevOps need to work together. Most people building DevOps organizations and DevOps automation systems are considering how they will build containerized applications within those processes. The reality is that, if you do a good job of building a DevOps organization, then containers are just another enabling technology to deal with. You just need to consider how the containers should work with continuous integration, continuous testing, continuous deployment, and so on.

One thing that needs to be front and center: containers need to be deployed. In some cases, they deploy to container clusters, which are managed by cluster managers. You may need to reconfigure those cluster managers on the fly so that containers can be updated with improvements and bug fixes.
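As one concrete example of such an on-the-fly update, here is a minimal sketch using the official Kubernetes Python client. The deployment name, namespace, container name, and image tag are all hypothetical, and it assumes the deployment uses Kubernetes’ default rolling-update strategy.

    # Minimal sketch using the official Kubernetes Python client
    # (pip install kubernetes). Names and image tag are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # reads the local kubeconfig
    apps = client.AppsV1Api()

    # Patching the pod template's image triggers a rolling update under
    # the default RollingUpdate strategy, so the bug fix rolls out
    # without taking the service down.
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:1.0.1"}]}}}}

    apps.patch_namespaced_deployment(name="web", namespace="default",
                                     body=patch)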

Testing can be a challenge as well. This includes testing the portability of the containers moving through the DevOps pipeline. Typically, that means you test for (and fix) API calls that are specific to a platform. Left uncorrected, those calls come with a trade-off everyone understands: the container runs, but it no longer ports.
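What that test can look like in practice: below is a minimal sketch of a portability gate, assuming a hypothetical convention that application code may not import cloud-provider SDKs directly. The module names in FORBIDDEN are examples; substitute whatever is platform-specific in your stack.

    import pathlib

    # Hypothetical policy: application code must not import provider-
    # specific SDKs directly; those belong behind an abstraction layer.
    FORBIDDEN = ("import boto3", "from google.cloud", "import azure")

    def find_platform_calls(src_dir: str = "app") -> list:
        hits = []
        for path in pathlib.Path(src_dir).rglob("*.py"):
            text = path.read_text(encoding="utf-8")
            for needle in FORBIDDEN:
                if needle in text:
                    hits.append((str(path), needle))
        return hits

    def test_no_platform_specific_imports():
        # Run under pytest as a pipeline gate; a failure flags code
        # that will not port cleanly between providers.
        assert find_platform_calls() == []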

This brings up a new issue: Container-based applications seem to be moving in proprietary directions—and therefore less portable ones. In 2017, we’ll see container providers offer new features that go beyond what’s considered open container standards. While they may support a standard, such as Docker, providers will each offer their own features. That means you may have some enhanced capabilities, such as a better approach for database access, but they may limit your ability to port the application to other container technologies.

This trade-off is being argued in many development shops, whether or not they are moving to DevOps. As the container space heats up in 2017, we’ll see more container technology providers move in proprietary directions, the objective being to differentiate their technology from others that leverage the same base standards. How enterprise development shops weigh this trade-off, what is best versus what is portable, will lead enterprises in different directions as they leverage containers.

Find your own path

Are containers right for your organization? That’s the core question to ask, beyond “What is the state of containers?” A self-assessment of business objectives can help an organization decide whether this is the right path; from there, you can figure out the enabling technology that best meets those objectives. And the answer could include containers.

In 2017, containers will address a few core business concerns:

  • If you’re likely to move core applications and processing from platform to platform or cloud to cloud, then containers hold the most promise. While there are trade-offs to consider, containers do provide portability. However, generally speaking, this benefit best applies to net-new, not legacy, applications.
  • Independent software vendors and other technology providers that are moving their technology to the cloud will find containers a sound option. They are betting the business on the technology, and making use of the portability and scalability features of containers is almost always a good idea. Note that this is the case whether you’re considering net-new or legacy applications.
  • If you’re moving to DevOps, then containers are likely a good choice for enabling technology, but again, typically best for net-new applications. Containers and DevOps technology providers have been working and playing well together for several years, and thus they are likely to work well for you. Again, legacy applications are not likely contenders for success with containers.
  • If cloud computing is in your future (and for most, that is true), then containers should always be a consideration. All major cloud providers support containers, including Amazon Web Services, Google, and Microsoft. However, they do so in different ways. Containers allow you to move applications between the providers with little or no effort, or at least that’s the way it’s being sold. We’ll hear about portability going well, but also about it going not so well.

In many cases, the answer to all those questions is, “It depends.” It depends on the organization’s business objectives, its existing applications, and how much risk the business is comfortable with.

Of course, many enterprises could ignore containers altogether and, in doing so, save money, once you account for the cost of changing skill sets and the cost of the technology itself. However, they may not find the value they need in the cloud or on other new platforms. Containers may address systemic problems that you can solve now, or solve later at a much greater cost.

In 2017, we know a few things will be true. First, containers are here to stay and have proved their value. However, they don’t work for everyone in every way, and your mileage will vary, a lot. Second, there will be as many success stories as failure stories as we learn more about containers’ capabilities and limitations.