Application Security , Cloud Security , Governance & Risk Management
The Challenges of Kubernetes and Cloud Containers
IBM's Nikki Robinson on Breaking Down Complex Environments in Order to Secure Them
The use of Kubernetes and cloud containers over traditional forms of storage and program development continues to grow. That's because Kubernetes and cloud containers make development easier by separating the applications from the environments.
For Nikki Robinson of IBM, that separation is part of her definition of security, which she says is a way to "break down ... complicated environments into something that's tangible and easy to manage." Containers "consolidate configuration and management," she says, and they "help to ease administration for developers."
But Kubernetes and cloud containers are also an attack vector in many data breach incidents. Robinson stresses the need for proper configuration settings in a containerized environment and says a security engineer should be involved in setting up the environments from the beginning.
In this episode of "Cybersecurity Unplugged," Robinson also discusses:
- The need for a software supply chain security program;
- The challenges of securing APIs and patching vulnerabilities;
- The importance of resiliency.
Robinson is an expert in statistical data analysis, quantitative methods and risk management. She serves as a fellow at the Institute for Critical Infrastructure Technology, whose mission is to cultivate a cybersecurity renaissance that will improve the resilience of our nation's 16 critical infrastructure sectors, defend our democratic institutions and hopefully empower generations of global cybersecurity leaders. Robinson is also a board member for the FBI InfraGard Maryland chapter and is actively involved in several different cybersecurity industry organizations.
Steve King: Good day, everyone, this is Steve King. I'm the managing director at CyberTheory. Today, we're going to explore the world of Kubernetes and cloud containers with Dr. Nikki Robinson, an expert in statistical data analysis, quantitative methods and risk management, and currently a full-time security architect with IBM. Nikki also serves as a fellow at the Institute for Critical Infrastructure Technology, whose mission is to cultivate a cybersecurity renaissance that will improve the resiliency of our nation's 16 critical infrastructure sectors, defend our democratic institutions and hopefully empower generations of global cybersecurity leaders. I'm all for that, and I wish you the best in that regard. We definitely need it, Nikki. She holds multiple industry certifications, including the CISSP and CEH. She's a board member for the FBI InfraGard Maryland chapter and is actively involved in several different cybersecurity industry organizations. Welcome, Dr. Robinson. I'm glad you could join us today.
Dr. Nikki Robinson: Thank you so much for having me.
King: Let's jump right in here. Why do we insist on making our lives so complicated and difficult?
Robinson: Is that the whole question? I love it. It's funny, because this question is why I wanted to get into security. I started in IT operations, managing a virtualized environment, did some network engineering and, when I became a system owner, I realized how complex the environment gets, especially once you integrate security and security practices into what you're doing. You add GRC regulations, policies, procedures, tools, all these things. I'm not sure why we insist on making our lives so complicated. I guess I can't answer that question. But I can say it's one of the reasons why I love being in security: I love the idea of trying to break down a lot of these complicated environments into something that's tangible and easy to manage.
King: Back in the day, when we were trying to figure out how to do all this stuff, the standard protocol with new systems was to layer them on top of existing systems, or to turn the old ones off in some way that made them inoperable but still there, except that we didn't do a very good job of that. I don't think people do a very good job of that today either; we do it on a much grander scale. We do that with tools, in particular. Without getting too far down that rabbit hole, we have a very, very complex environment, and I think that behavior hasn't changed in quite a few years. One of the things that immediately comes to mind is Kubernetes. For those that aren't familiar with containers and Kubernetes, maybe you can explain those technologies to our audience, and why organizations seem to be increasingly adopting them over traditional forms of storage and program development.
Robinson: I think the idea with containers, using something like Kubernetes, is to get a couple of things right. One would be to make development easier for developers by separating the applications from the environments, from the OS levels, and managing them separately, which helps you develop faster and easier. As far as application development goes, it makes that area a lot easier. You could have one container, or you can have multi-container applications. There are so many different configurations that you can do with it. The idea is that it's easy to use. It's not like standing up 40 servers from a template anymore. You're using containers and building these environments, ideally limiting how many different machines or environments you're logging into, how many applications you're logging into. The idea is to consolidate configuration and management. So containers can be really powerful. They can in some ways make environments more complex. But in other ways, especially if you're starting fresh, standing up a containerized environment versus a virtual desktop environment or virtual servers can be easier to manage, and if you have some previous skill set or experience with it, it can be easy to set up and configure. That's the answer from a "why are we doing it" perspective. From an administration standpoint it can be easier as well. On the flip side, one of the other things to remember about using something like Kubernetes and containers is that a lot of the same principles still apply. But asking how much time you're spending on administrative overhead managing siloed servers, or applications on servers, is a different way of thinking about it. Containers help to ease administration for developers.
King: Like so many things, on paper that makes complete sense. Why would you not want to separate those components out? Why would you want to replicate the same stuff you replicated the last time you wrote a system that looked very similar to the one you're writing today? The same argument goes for APIs: why would I want to write this for the sixth time when, the next time I need whatever the function is, I'll just call it out of a library? That occurred famously to somebody in, what, 1996 or something, and the rest is history. But these things take on lives of their own, and open source is terrific on paper, but when it's not managed in the intended manner, it also has a tendency to expose itself to the errors and mistakes that human beings typically make. Maybe you have some commentary around that as well. I saw recently that researchers said in a report they had found something like 380,000 publicly exposed Kubernetes API servers, and it doesn't make sense. People like spinning these things up and just leaving them when they're no longer useful.
Robinson: Yeah, it's the same way I feel about this race to the cloud. We have these on-prem environments, and if we go to cloud environments, we move away from a physical data center, which makes sense for the business. It's potentially reducing cost and reducing in-house management. If you don't have a dedicated security team or a dedicated cloud engineering team, you can still do application development using cloud systems without having to manage the infrastructure in the same way; there are so many benefits to moving to the cloud. But when a lot of organizations started moving to the cloud, there were all of those exposed S3 buckets, tons of data breaches with misconfigurations around S3 buckets, and I think it's very much the same way when it comes to Kubernetes containers: a lot of them have been stood up. I'll talk about API security in a second, because I think that's part of the conversation, but a different piece of this. If you don't understand cloud security architecture, or identity and access management principles, zero trust principles, those types of things, and how to apply them in a cloud environment, it's very much the same in a containerized environment: if the proper configuration settings aren't there, you're still potentially exposing these environments publicly. Even if there are some considerations up front, without a proper investigation by security architects and engineers during the design and development phases, you end up with these publicly exposed Kubernetes clusters, misconfigured containers, data breaches, all of those things. Without the proper knowledge of how to secure them by design, that's where all of that comes from.
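The "proper configuration settings" Robinson describes can be sketched as a simple audit pass over a cluster definition. This is a hypothetical illustration, not a real Kubernetes tool: the configuration keys (`anonymous_auth`, `api_server_public`, `privileged`, `run_as_root`) and the rules applied to them are invented for the example, though they mirror real classes of misconfiguration.

```python
# Toy "secure by design" audit: flag common container/cluster misconfigurations.
# All config keys and rules are illustrative, not tied to any real tool.

def audit_cluster_config(config: dict) -> list[str]:
    """Return a list of misconfiguration findings for a cluster config."""
    findings = []
    if config.get("anonymous_auth", False):
        findings.append("anonymous authentication is enabled")
    if config.get("api_server_public", False):
        findings.append("API server is exposed to the public internet")
    for name, container in config.get("containers", {}).items():
        if container.get("privileged", False):
            findings.append(f"container '{name}' runs privileged")
        if container.get("run_as_root", True):
            # Default-deny thinking: absence of an explicit setting is a finding.
            findings.append(f"container '{name}' runs as root")
    return findings

if __name__ == "__main__":
    risky = {
        "anonymous_auth": True,
        "containers": {"web": {"privileged": True, "run_as_root": False}},
    }
    for finding in audit_cluster_config(risky):
        print("FINDING:", finding)
```

The point of the sketch is the review step itself: a security engineer involved at design time runs checks like these before the environment is exposed, rather than six months after.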
Then to speak on the API security portion: API security in general is something I've gotten really interested in, especially in the last six to eight months - what API security means. APIs in general make our lives easier. From a development standpoint, from an administrative standpoint, they are fantastic. We can use them for all kinds of things. But because they're so commonly used, and used in lots of different ways, there is the possibility for human error, for them being set up insecurely, for them being integrated or having dependencies that we're not aware of. API security has become a big issue, something that I'm certainly concerned about, because in the same way as you're setting up these containers and clustered environments, if you're just setting up APIs everywhere with API keys, and you're sharing them and using them in different ways, you don't know the scope of the potential risks unless some risk analysis was done - "Hey, we're setting up this environment, we're going to have open and exposed APIs here, here and here, what do we need to know?" There's an OWASP Top 10. In the last year or two, maybe a little longer, they've been branching out into all these other OWASP Top 10s, and they have one on API security. If anyone hasn't checked it out and is interested in what API security means, it's a fantastic list to get an idea.
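The "API keys everywhere" problem Robinson mentions is often caught by secret scanning. A minimal sketch, assuming two invented patterns only (production scanners ship far larger rule sets): flag source text containing strings shaped like embedded credentials.

```python
import re

# Illustrative credential patterns; real scanners use hundreds of rules.
KEY_PATTERNS = [
    # something like: api_key = "abcdef1234567890..."
    re.compile(r"api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]", re.IGNORECASE),
    # the general shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_hardcoded_keys(source: str) -> list[str]:
    """Return substrings of `source` that look like embedded credentials."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Run as part of code review or CI, a check like this is one concrete answer to "you don't know the scope of the potential risks": it at least inventories where keys live before they spread.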
King: What do we do about that? If I agree with you that all of this invites human behavior, and human behavior is fraught with errors, mistakes and confusion around things that look right but aren't, what do we do about it?
Robinson: I think the biggest thing is this: if, as a developer or an engineer, you're standing up these environments and you don't have security expertise around them - and I think it's unfair to assume that everybody has to know everything about security - then there should be a security engineer, or someone who has at least some experience with this, involved in the architecture and design of whatever application or development environment you're creating and building, so that you can get those answers from the ground up. Have them involved initially, or even later: if you've had this environment running for six months and you think, "Oh, I haven't had anyone look at this. Maybe I should have someone come by and look." If you're not an expert, have an expert come by and check it out. Because you don't know what you don't know.
King: Was much of your development experience cloud native? Or did you start life on-prem a bit as well?
Robinson: I started life on-prem. My undergrad was IT and software engineering. I did some C++, a lot of SQL, a lot of Java. On the web development side, building and managing websites, a lot of CSS and HTML. But I started on-prem, and then just in the last five, six, seven years I've been dealing more with cloud environments as they've become more prevalent and everybody is looking to cloud solutions.
King: I've seen numbers suggesting that as much as 98-99% of even new code is reused code that comes out of repos. Is that really a good idea? One of the things I find hard to believe, if anyone claims it, is that this reusable code is vetted. There are so many dependencies that the APIs themselves require, sitting somewhere out on an attack surface. I don't think we've got a good handle on what gets called and what doesn't under what circumstances, let alone the actual functionality of the API that's being called by the API that you're using in your code.
Robinson: I think it's absolutely a big challenge right now, because it's the way technology goes. We adopt new methods and new ways of doing things because they make development easier. But then security comes along after and says, "Oh, hey, hold on a second, we're actually a little concerned about this." It is difficult at this point to know what dependencies exist, what third parties - now we're talking about fourth- and fifth-level parties - are involved. Once you add that many different dependencies, there are different groups, and you may not even know who's managing things. I was reading an article today about how lots of bad actors, malicious actors out there, are offering to maintain code - offering to maintain these open-source libraries - and then injecting malware or other malicious code into them. That's pretty scary. It's definitely a big concern when we're talking about adopting open-source software and what open-source software security means. I'm in security, so I am a self-proclaimed doom and gloom expert, but I think you have to assume at some point, given the number of dependencies you have, or the number of libraries you may be using, that there may be malicious code in there. You have to assume that it's there. Then you can manage risk from there.
King: Is that the new service called malware as a service? Is that how that works?
Robinson: Malware as a service, ransomware as a service, it's all out there.
King: That's great. Let's talk about cloud, for example, as long as we're talking about complexity here. The rush to the cloud makes sense for all the right reasons. Hybrid cloud makes sense for those right reasons. But the doing of it is not simple. Take container technology, and not just Kubernetes in isolation, because I think Kubernetes was designed by a bunch of folks at Google who had a little internal contest that said, "Look at how smart I am, because I'm going to build some shit here that nobody else can figure out." On the container and cloud side, both of those are complex technologies. It's where a lot of breaches occur. In terms of that level of complexity, do you have any thoughts about how we get our hands around the software supply chain as it relates to either or both?
Robinson: The biggest thing is awareness, but too many times we want to know: what is the fastest way I can get this done? Most of the time, it's probably because there are dollars involved - either you're supporting a customer or you're developing a solution, or whatever it might be. But the real challenge to organizations, and to cloud environments in general, is what level of support you need. If you're trying to save money, then you're not going to have as much infrastructure support managed by whatever cloud provider it is. But if you're not trying to save money, and you know that you don't have the expertise or skill set in your environment to manage the security - the patching, or the basic vulnerability management components of the environment - you can turn that infrastructure over to somebody else and let them do it. That's one of the biggest thoughts when it comes to cloud specifically: if you're trying to save money by going to the cloud, you may not be saving on security. There are a lot of free security tools. I would say get educated on those and know what is available. But you've got to break that down to what's applicable to your environment and what skill set you have available to manage those environments. When it comes to open-source software and software supply chain security, I think a lot of organizations are going to have to have a software supply chain security program, especially if they are heavy into development, building solutions, building tools, or leveraging open-source software, which most people are. Having a good program in place is going to be important to making sure that things like Log4j don't have as much of an impact. That's why I say one of the biggest things you can do is have an incident response plan. Just assume that something is going to happen.
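One building block of the supply chain program Robinson describes is comparing an inventory of components against known-vulnerable versions - the check that would have flagged Log4j. A minimal sketch, assuming a hand-maintained advisory table (real programs pull from feeds such as the NVD or OSV and match version ranges, not exact pins):

```python
# Toy SBOM check: map inventoried (package, version) pairs to known advisories.
# The advisory table is a stand-in for a real vulnerability feed.
KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228 (Log4Shell)",
    ("openssl", "1.0.1"): "CVE-2014-0160 (Heartbleed)",
}

def check_inventory(inventory: list[tuple[str, str]]) -> dict:
    """Return the subset of the inventory that matches a known advisory."""
    return {dep: KNOWN_VULNERABLE[dep] for dep in inventory if dep in KNOWN_VULNERABLE}
```

Having this inventory in place before an incident is the point: when the next Log4j lands, the question "where do we run it?" is answered by a lookup instead of a scramble.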
If you want to leverage this technology, that's fine. It's understandable. But if you can have at least an IR plan in place, and understand what you're going to do when something happens, then you can recover a lot faster and not have it impede you as much. There are certainly mechanisms around it. Not to say don't use it at all, but if you're going to use it: one, be careful, and two, just be prepared for when something goes wrong.
King: Maybe it's just my vantage point, but this is my business and has been for a long time, and I just don't see a lot of people developing, testing or getting involved with their IR plans, and I wonder why that is. It's common knowledge: "Oh, if you're going to do that, you need an incident response plan." Great. We all understand that. Why don't we do it?
Robinson: It's a great question, and it circles back to your first question: why do we make this so difficult? We have the tools in place; we can do this. I think the biggest thing is - and I can only speak for smaller or mid-sized organizations - how big are their security programs? How much do they have to spend on a program? Or how much are they leveraging? Putting a good incident response program in place is essential, but I don't know that it gets as big of a play. It's not an exciting topic to talk about your incident response plan and your communication plan and all that. Threat hunting is exciting, pen testing is exciting, red teaming - these are the exciting areas of security. Building in resiliency isn't always the first thing people think of. That's not to say that people don't, but I think you're right: IR plans need to be not just in place, but tested frequently. To get a little sidetracked, I think this is why the principles of chaos engineering and chaos security engineering are so interesting, because you take that IR plan, that reactive approach, and turn it into a proactive approach, where you're testing your environment consistently to make sure that your security configurations are in place and working as expected. That way, you're not waiting for something to happen; you are actively testing your environment.
King: Patching and just general hygiene aren't very exciting, either. But they seem to be the cause of most of our breaches. Something's got to give here. You mentioned chaos engineering. Am I mistaken or did you teach chaos engineering at some point?
Robinson: I've done a few talks on chaos engineering and chaos security engineering principles. I got interested in this about two years ago, because I was looking into chaos engineering and what they did at Netflix and a couple of the other big organizations. I love the hypothesis-based, scientific-method approach of chaos engineering. It sounds like, "Oh, chaos engineering, I'm just going to break stuff." But it is a methodical approach: I have a hypothesis; I believe that if I change x, then y will occur. Then you can test it and see if it works. You have this iterative approach of "I think this might break; let's see if it breaks."
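The hypothesis-driven loop Robinson describes - state a steady-state hypothesis, inject a fault, observe whether the hypothesis holds - can be sketched in a few lines. The "service" here is a toy stand-in invented for the example, not a real system under test:

```python
# Minimal chaos-experiment harness: hypothesis -> fault injection -> observation.
# `service` is a toy model: healthy while at least one replica survives.

def service_healthy(replicas: int) -> bool:
    return replicas > 0

def run_experiment(replicas: int, kill: int) -> dict:
    """Hypothesis: the service stays healthy after losing `kill` replicas."""
    surviving = max(replicas - kill, 0)       # inject the fault
    return {
        "hypothesis": f"service survives losing {kill} of {replicas} replicas",
        "healthy_after_fault": service_healthy(surviving),  # observe
    }
```

A failed experiment here is a finding, not an outage: it tells you, under controlled conditions, exactly which assumption in your IR plan does not hold.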
King: It appears to me to be a much more rigorous approach than some random pen testing or red team activity, which appeals to me, because then you've got an audit trail: this is where we started, this is where we ended up, and here are the four places where we need to do something. I ask because, as you're probably aware, we're building out our CyberEd.io training program here at ISMG. It's one of the things I'm involved with, and if you're interested in putting together a two- or three-module course on chaos engineering - how you put that together and what your expectations might be around that - we'd be delighted to work with you on it.
Robinson: We'll have to talk about that.
King: For those looking to learn more about Kubernetes and container security, do you have any recommended resources, aside from our training?
Robinson: There are a lot of good resources out there, specifically on container security and cloud security principles. I know the Linux Foundation has some cloud security material, and Udemy has some good stuff too. There's a lot of good open-source information available. If you're interested in security from a pen testing angle, or in getting an idea of how an attacker would get into a system and why, there's Hack The Box. That's a great resource. Then, because he wrote the book on security chaos engineering, I have to mention Aaron Reinhart. I highly suggest picking up that book too. It's a great resource if you're interested in chaos security engineering principles and how they might apply to taking incident response into a proactive view.
King: Great. Thank you. The last question I have is around vulnerability management. I wonder why it's still so difficult for companies, when it's obvious why it's important to patch and to mitigate end-of-life software and all of those issues. Is vulnerability scanning still a major component of securing a network if you have a continuous monitoring program, and what are the current issues around patching and end of life?
Robinson: I think one of the biggest problems is that there are so many vulnerabilities released every day. Just from a mental standpoint, when you're looking at a vulnerability scanning tool or a report on vulnerabilities, it can be overwhelming - just seeing how many vulnerabilities exist in the environment. It's difficult because we, as an industry, make vulnerability reporting complicated, instead of saying, "Hey, you have these 10 assets, they're the most vulnerable, patch these first. Get those done. Okay, now focus on the next, and the next, and the next," and making it an iterative approach. But it can be time consuming. Patch management can be time consuming. If you're running VMs, it's so much easier: you just update the base template and push it out to all your VMs. Versus: I have this many servers, they're not based on a template, now I've got to patch each one, and I don't have a patch management strategy. Those are the basics that sometimes get overlooked: (a) I need a patch management strategy, and (b) how do I automate that strategy and take as much of the manual overhead off of me as possible. Those are two big things that can impact an organization positively. But it's still difficult because people have gotten into this big tech debt space, with so many old applications and so many old or end-of-life operating systems that they're using, because they feel like, "Well, I'm supporting this environment, I have to use it, I can't get off of it." I think if we can change that mindset and say, "If you're thinking about moving to the cloud, or moving from cloud to containers, you can use that as a good opportunity to remove a lot of the tech debt, to remove a lot of the vulnerabilities, and start with updated software, updated libraries, all of those things."
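The "patch these 10 assets first" triage Robinson describes can be sketched as a simple ranking. This is a deliberately naive illustration: the score is just a sum of CVSS-like numbers per asset, whereas real prioritization also weighs exploitability, exposure and asset criticality.

```python
# Toy vulnerability triage: rank assets by summed severity scores and
# surface the n riskiest, turning an overwhelming report into a short list.

def top_assets_to_patch(assets: dict, n: int = 10) -> list[str]:
    """assets maps asset name -> list of severity scores; return n riskiest names."""
    ranked = sorted(assets, key=lambda name: sum(assets[name]), reverse=True)
    return ranked[:n]

if __name__ == "__main__":
    report = {
        "web01": [9.8, 7.5],
        "db01": [5.0],
        "mail01": [9.1, 9.8, 8.2],
    }
    print(top_assets_to_patch(report, n=2))
```

Even a crude cut like this supports the iterative approach: fix the top of the list, rescan, and repeat, rather than staring at the full report.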
But that would have to be security by design in your new environment. I think right now, it's difficult, because a lot of people are still in these hybrid situations. They've got some on-prem, they've got some cloud, or maybe they have hybrid cloud. We've made our environments so big and complex that vulnerability management becomes challenging.
King: That last statement is after my own heart. I'm convinced that complexity is our biggest problem, along with our insistence upon looking the other way. We don't want to deal with it - who would? The size of the problem is just enormous. But your last set of comments about vulnerabilities was terrific and a great way to end our session. I hope we can come back together again in a few months and talk some more about this, because this problem is not going away; it's getting worse every day. Our adversaries don't share a lot of these problems, and it weakens our overall ability to compete, along with a whole bunch of other things we don't have time to cover right now. But I do appreciate you taking time out from your schedule to join us and help us understand some of the more esoteric parts of the cybersecurity puzzle today.
Robinson: Absolutely. Thank you for having me.
King: I will look forward to having you again soon. Thank you to our audience for spending half an hour with us today. Hopefully it was useful to you as well. With that, take care and we'll talk to you again soon.