Serverless Chats

Episode #24: Serverless Application Security with Ory Segal (Part 2)

This is PART 2 of my conversation with Ory Segal. View PART 1.

About Ory Segal:

Ory Segal is a world-renowned expert in application security, with 20 years of experience in the field. Ory is the CTO and co-founder of PureSec (acquired by Palo Alto Networks), a start-up that enables organizations to build and maintain secure and reliable serverless applications. Prior to PureSec, Ory was Sr. Director of Threat Research at Akamai, where he led a team of top web security and big data researchers. Prior to Akamai, Ory worked at IBM as the Security Products Architect and Product Manager for the market-leading application security solution IBM Security AppScan. Ory has authored 20 patents in the fields of application security, static analysis, dynamic analysis, threat reputation systems, and more. Ory serves as an officer of the Web Application Security Consortium (WASC), is a member of the W3C WebAppSec working group, and was an OWASP Israel board member.


Transcript:
Jeremy: All right. So let's move on to number four. Number four is over-privileged function permissions and roles. This is one of my favorites, because I feel like this is something that people do wrong all the time, because it's just easy to put a star permission.

Ory: Yeah. And this is an issue that I've been thinking about a lot: why is it, from a psychological perspective, that developers put a wildcard there? Obviously, we've talked about the very granular and very powerful IAM model in public clouds, and that's very relevant to serverless. You break your app down into functions, and you need to assign to each function the permissions that it actually needs in order to do its task, and nothing more than that. That's the point here. How do you make sure that if somebody exploits the function, if somebody finds a problem in the function, they are not able to manipulate that function to maybe do some lateral movement inside your cloud account, move to other data stores, etc.?

So that's very important, and we see that developers have a tendency, and this is one of the most common issues, to just use a wildcard and allow the function to perform all of the actions on certain resources. As I said, this is something that I've asked a lot of developers about: why are they doing that? And I'm hearing different answers. Some are just lazy. I have to admit, I do that from time to time as well; it's much easier than actually having to go to the documentation and figure out the exact name of the permission that I need. The other set of developers talked about future-proofing the function. So they said, okay, now the function only puts items into the database, but maybe next week I'll need it to read, which by the way violates the principle of single responsibility, but let's put that aside. And so they just put maybe CRUD permissions, or they put everything.

And then there are those who just either don't care, or don't know, or are not aware that this is a problem. So those are the three types of developers, or answers, that I've heard. But this is by far the most common issue, and I've seen frameworks that automatically generate wildcards as well, which is also bad. And I've seen some bad examples in tutorials, which is the worst thing that can happen, because we're trying to teach people how to write [crosstalk 00:48:21].

Jeremy: To go the other way. Yeah. Well, the example that the document uses is the dynamodb:* permission. And I love this example, because you would think, okay, put items, get items, query items, delete items, that seems like what I'm giving it permission to do. But, no, when you give the dynamodb:* permission, you're giving it the ability to delete tables, or change provisioned capacity, and you can do a lot of really bad stuff there. And obviously, this is all predicated on someone actually being able to get into your function, but that is something that is possible ... again, it's limited in how you can do that, but it certainly is possible. We'll get into that more.
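For illustration, a minimal sketch of the difference, expressed as Python dicts in the shape of IAM policy statements (the account ID and table name are placeholders):

```python
# The wildcard grants PutItem, GetItem, and Query, but also DeleteTable,
# UpdateTable, and every other DynamoDB action on the resource.
over_privileged = {
    "Effect": "Allow",
    "Action": "dynamodb:*",
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
}

# A least-privilege statement lists only the actions the function actually calls.
least_privilege = {
    "Effect": "Allow",
    "Action": ["dynamodb:PutItem", "dynamodb:GetItem", "dynamodb:Query"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
}
```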

Just one point about the permissions per function. One of the things that I like to do is I try to give each function the permissions that I think it needs, then I publish it to the cloud, and then I try to run it. And actually, AWS does a great job of giving you the error saying, this function doesn't have the dynamodb:PutItem permission, or something like that.

Ory: You're basically using debugging to figure out the right permissions, right?

Jeremy: It works.

Ory: It's not nice. But they do have a service, I think it's called Access Advisor, and I think Google came out with a much better automated solution for that. Essentially, Access Advisor will look at historical logs for, I don't know, a few days or a few executions, and will tell you, it looks like you have too many permissions, you should probably reduce them. But this is something that we've done at PureSec with the open-source least-privilege plugin that we wrote, which you can use with serverless. It basically statically analyzes your code, extracts all the API calls, and then maps them to the least required privileges, and will actually generate a least-privilege role for you. And we'll talk about the future, but I think this is something that will eventually have to be solved somehow, or cloud providers will probably produce better tools around that.
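As a rough illustration of that static-analysis idea (a toy sketch, not how the actual plugin is implemented; the action mapping here is a tiny hypothetical subset):

```python
import ast

# Hypothetical mapping from SDK method names to IAM actions;
# a real tool's tables would be far more complete.
API_TO_IAM = {
    "put_item": "dynamodb:PutItem",
    "get_item": "dynamodb:GetItem",
    "publish": "sns:Publish",
}

def required_actions(source: str) -> set:
    """Collect IAM actions for every recognized SDK call in the source."""
    actions = set()
    for node in ast.walk(ast.parse(source)):
        # Method calls appear as Call nodes whose func is an Attribute.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            action = API_TO_IAM.get(node.func.attr)
            if action:
                actions.add(action)
    return actions

print(required_actions("table.put_item(Item={'id': '1'})"))
# {'dynamodb:PutItem'}
```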

Jeremy: Well, I think some frameworks are doing work with guardrails and stuff like that too, which helps a little bit, but it's not quite there yet.

Ory: Mostly around the asterisk, around the wildcard, but not actually-

Jeremy: What you actually need.

Ory: Exactly. Yeah.

Jeremy: All right, so let's move on to number five, and this is probably tied to number 10 in a way. Number five is inadequate function monitoring and logging, and I think it extends a little bit to number 10, which is improper exception handling and verbose error messaging. Whereas logging is a good thing [crosstalk 00:51:04] but can be a bad thing, right? So let's talk about inadequate function monitoring and logging first.

Ory: Well, looking at this issue now in hindsight, and seeing that there's an entire industry of serverless monitoring vendors and solutions, I think we already see that this is a real need. And it becomes more critical for security. I'm not talking about performance and tracing and things like that, but being able to properly monitor your functions and to log the right things is critical. For example, if somebody runs a SQL injection attack and triggers some exceptions, where would you even see that? I'm not even sure you would see that in the default logging facilities that the cloud providers give you.

So it goes back to the fact that developers have to worry about application-security-specific logging. So you have to write more into CloudWatch, and if it's related to IAM and things like that, then you would see that in CloudTrail; I'm talking AWS, of course. But yeah, without that, you're pretty much blind to the attacks that you're experiencing.
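As a sketch of what application-level security logging might look like in a Python Lambda (anything printed to stdout lands in CloudWatch Logs; the field names and the validation rule are just example assumptions):

```python
import json
import time

def log_security_event(event_type, detail, request_id):
    # Structured entries are much easier to search and alert on than free text.
    print(json.dumps({
        "level": "SECURITY",
        "type": event_type,          # e.g. "input_validation_failure"
        "detail": detail,
        "requestId": request_id,
        "timestamp": int(time.time()),
    }))

def handler(event, context):
    user_id = event.get("userId", "")
    if not user_id.isalnum():
        # Record the rejected input so an attack leaves a trace in the logs.
        log_security_event("input_validation_failure",
                           "non-alphanumeric userId rejected",
                           context.aws_request_id)
        return {"statusCode": 400, "body": "Bad request"}
    return {"statusCode": 200, "body": "ok"}
```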

Jeremy: And then you have the whole issue too, where if it doesn't necessarily trigger an error and you're not capturing what the original input was, then how do you inspect that input? If you're not logging that input somewhere, it's not just getting logged automatically for you like access logs. Obviously, if you have API Gateway set up, you can enable access logs and you can see some of that. But even the POST data and some of these other things are definitely hard to see.

Ory: That was never available. By the way, if you look historically, Apache never logged POST bodies, and for good reason. I think from a PCI and data privacy perspective, you don't want them logged all the time, maybe just when there's a security exception.

Jeremy: Yeah.

Ory: It's the same for serverless. I don't think you can actually enable full event logging end to end...

Jeremy: I think some of the observability tools let you though.

Ory: Yeah.

Jeremy: Which can be somewhat dangerous if it contains information that it probably shouldn't contain. But anyways. So, yeah, I totally agree. I think that logging is important to make sure that you have visibility, and certainly the monitoring tools are helping with this. All right. Okay. So let's move on to the next one, because this one, I think, is probably the biggest security hole in any type of application. This is not specific to serverless, but it's certainly something that, if it was taken advantage of, and again, this may be theoretical, could cause pretty dangerous side effects. And that is number six, which is insecure third-party dependencies.

Ory: Yeah, so I read somewhere recently that third-party libraries make up, I think, 75% of the code we write today; it's actually coming from external, third-party, untrustworthy sources. As you said, that's really not specific to serverless, and this is also something that I mention when I give my serverless security talk. I think the only difference worth thinking about when talking about third-party dependencies is, how do you monitor those dependencies? Because you don't see them running; it's not running in your environment. If for some reason a dependency is leaking data, or sending your credentials or your API keys to some third party, you have no perimeter to block that from happening, and you can't really monitor its behavior. You'd have to somehow run it locally and monitor its behavior. So the problem, I think, is that it becomes a bit harder to detect that you have infected third-party dependencies.

I think another thing to consider is that in serverless, you always reduce the amount of code in a function to the minimum, as we talked about with the principle of single responsibility. And so a lot of the time you rely on third-party libraries to do some of the heavy lifting for you. You can't even write a serverless function without at least one import; you have to import json, for example, if you're talking about Python. So you start by already importing a library, and then, frankly, nobody is monitoring these dependencies. I have severe trust issues with open source projects. Nobody is keeping us safe, and if we're talking about all the Snyks and the WhiteSources, they're very good at listing known vulnerabilities, CVE-type vulnerabilities. But all of the malicious packages, the malicious code that was injected into open source packages lately, was only discovered, I think I did some research, more than three weeks after it was injected. So there's nobody really monitoring this, and there's no real solution to tell you that somebody injected malicious code in there. And that's an issue.

Again, not a serverless issue, but it becomes worse when you don't have any tools to control the environment.
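There's no complete fix for this, but pinning exact versions and auditing what's actually installed in the deployment package narrows the window. A small sketch of the auditing idea, assuming an allowlist of vetted packages that you maintain yourself:

```python
from importlib.metadata import distributions

# Hypothetical allowlist: packages (and exact versions) you have vetted.
ALLOWED = {
    "requests": "2.31.0",
    "boto3": "1.34.0",
}

def audit_installed():
    """Flag any installed distribution that isn't on the vetted list."""
    for dist in distributions():
        name = dist.metadata["Name"].lower()
        if ALLOWED.get(name) != dist.version:
            print(f"unvetted dependency: {name}=={dist.version}")

audit_installed()
```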

Jeremy: Yeah. And I think this is the ... we're going to talk about remote code execution. People can't just hack into your function, right? That's not a thing. It's not like a traditional server where you're going to see that happen. But for remote code execution, I would say third-party dependencies are probably the number one, overwhelming exploit path. And if you look at a lot of the tools that people are writing, again, like you said, this applies to every type of service you're building. But certainly, you look at a lot of the tools that have been built, a lot of open source tools, and they run SQL queries, right? So how would you even detect that this runs a SQL query? Maybe it's supposed to be running that delete; maybe it's supposed to be running that select, or that scan on the DynamoDB table. You'd have to really understand what each one of these dependencies is doing to know whether or not it's doing something it's not supposed to.

And obviously, posting that data to some third-party service is probably the easiest way for these RCEs to get the data somewhere else, and that's something the cloud provider does provide some mitigation for: if you're running in a VPC, then you can disable outgoing HTTP calls. I know PureSec did some work around that as well, and that's part of the system there. But yeah, I just think this is one of those things where people say, oh, there's a service that does this or a plugin that does this, I'm just going to use that. And what you don't realize is that you may be installing hundreds of dependencies, each one of those maintained by different people of questionable intent or questionable reputation. And we just do it because it's easy. But yeah, this one scares me the most, and I think it's a very good practice to be very conscious of the third-party dependencies you use.

Ory: And if you remember, I think we talked earlier about my wife's WordPress [crosstalk 00:59:09], right, blog. WordPress itself, I think the number of vulnerabilities found in WordPress is rather low, but it's usually those third-party plugins that you add, and God knows who wrote them and what their background in application security is. The majority of holes and CVEs and exploits you see around WordPress are because of those plugins, and I think the same applies to application security for modern applications, and for serverless in particular, where you can write the perfect application, and do threat modeling, and pen testing, and everything properly, including input validation, but then you're using some deserialization package which includes a remote code execution vulnerability, and that's it. That's the weakest link in the chain. So yeah...

Jeremy: And you may have followed every other best practice and it doesn't matter. Which I think ties into this next idea: besides the third party just being able to somehow execute a little snippet of code, part of this is, again, for the purpose of maybe stealing application secrets, the tokens and the session tokens and things like that. So number seven is this idea of insecure application secret storage. What's that about?

Ory: Well, most applications use secrets: API keys, passwords, whatever. And the security of those secrets really depends on where you store them. We've seen, again, and this started because of some bad examples and tutorials, people storing secrets hard-coded, which is obviously the worst, and then they push the project to Git or something and it leaks out. Then later on, people used environment variables, because that was the best, or maybe the only, way to store them in the stateless world of serverless applications, and those have a tendency to leak pretty quickly. I have a good example of that in the presentation that I usually give.

And at some point, I think cloud providers decided to solve this, and now they all offer secret storage that you should be using, with some KMS, some key management, and you encrypt the secrets. That's all terrific, and you should be using it. Keep in mind, though, that if somebody manages to run code, if you end up with an RCE, remote code execution, then those secrets are not secrets anymore. So that problem hasn't been solved yet. If somebody can run code on behalf of your function through some RCE, then it's game over. I haven't seen a solution for that yet.

Jeremy: But I do think that environment variables are the more dangerous place to store those, because it's very easy for a third-party package to just look at the environment variables; that's a standard place to look. If instead you accessed them at runtime, even caching them in the function, pulling them and storing them in global variables that aren't accessible through the environment, then the attacker would have to do something a little bit more tricky. They'd actually have to do some static file analysis to find where these variables are being used and then try to load them that way. So it might be harder if they're not stored in environment variables anyway.
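A minimal sketch of that pattern, assuming the secret lives in SSM Parameter Store (the parameter name is a placeholder):

```python
import boto3

ssm = boto3.client("ssm")
_db_password = None  # cached in a module-level variable, not in os.environ

def get_db_password():
    """Fetch the secret once per container; warm invocations reuse the cache."""
    global _db_password
    if _db_password is None:
        resp = ssm.get_parameter(Name="/myapp/db-password",  # placeholder name
                                 WithDecryption=True)
        _db_password = resp["Parameter"]["Value"]
    return _db_password

def handler(event, context):
    password = get_db_password()
    # ... connect to the database using the cached credential ...
    return {"statusCode": 200}
```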

Ory: I would love to see cloud vendors make use of technologies like Intel SGX secure enclaves, where you can store things in a very secure manner, and even if somebody has remote code execution, they won't be able to read them. We're not there yet, but the technology exists, and I think at some point they will make use of it. Yeah, I think we've even said too much on this.

Jeremy: Yeah, probably. But speaking of having the cloud vendor do more, Microsoft Azure Functions actually just released something where their secrets are now available without you having to add any code to access them. So, I don't know what that's all about; I have to look more into it, but it seems like an interesting approach. All right, so anyway, let's move on to SAS-8. This is denial of service and financial resource exhaustion.

Ory: So I don't think we need to spend time talking about denial of service specifically. I think there's an interesting aspect of denial of service, especially in serverless environments, where you can cause financial resource exhaustion. If you find a function that, with some input, will work harder and longer, you can definitely inflict some financial pain if you'd like. On the denial of service side, the only thing I want to say is that with the auto-scaling, the infinite scaling, or whatever people claim serverless platforms to be, you have to pay attention. You have to design with scalability in mind; it's not all for free. So you have to think about how you design the application, use the right services, think about what is invoking what, what your timeouts are, how your application is going to handle these things. Just because you're using Kinesis with a serverless function doesn't mean that you're infinitely scalable, as an example.
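A sketch of the kind of cheap, early validation that helps here: reject malformed or oversized payloads before doing any expensive work or fanning out to downstream services (the limits are arbitrary example values):

```python
import json

MAX_BODY_BYTES = 10_000  # arbitrary cap for this example
MAX_ITEMS = 100          # arbitrary cap for this example

def handler(event, context):
    body = event.get("body") or ""
    # Reject oversized payloads before parsing anything.
    if len(body.encode("utf-8")) > MAX_BODY_BYTES:
        return {"statusCode": 413, "body": "Payload too large"}
    try:
        payload = json.loads(body)
    except ValueError:
        return {"statusCode": 400, "body": "Invalid JSON"}
    # Validate the shape before touching queues or databases.
    items = payload.get("items")
    if not isinstance(items, list) or len(items) > MAX_ITEMS:
        return {"statusCode": 400, "body": "Bad request shape"}
    return {"statusCode": 200, "body": "accepted"}
```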

Jeremy: Right. Yeah. And certainly flooding queues and things like that are other ways to raise the costs or to slow down the execution of other important messages. And that's where just good practices help: validating the shape of the data when it's coming in through something like API Gateway, or making sure that you're not flooding certain downstream services. But I agree, that's something that I think is well written about, so there are lots of ways to find information there. Probably same thing here with SAS-9, which is serverless function execution flow manipulation. So, thoughts on that?

Ory: I think we changed the name later on to serverless business logic manipulation.

Jeremy: You did. Yes. Serverless Business Logic Manipulation.

Ory: But it's the same thing. Again, this is not specific to serverless, but rather to service-oriented architectures in general: web services, service meshes, etc. We're back to the fact that we broke the application down into many, many tiny, laser-focused services, and now they only interact with each other, and who knows if something is even enforcing the flow that you expect. Think about an application where you have an API request coming in, triggering a Lambda function that then puts some data inside a bucket, and that triggers an event that triggers another function that stores something inside a queue, which eventually triggers another function. At least from what I know, there is no way for me to enforce the order in which the ... unless I'm using Step Functions, and then it's a whole different game.

But if I'm building it the classic, traditional way, if you can say traditional about serverless applications, if I build it the normal way, nothing is actually promising me that the services are invoked in the right order, or that one service can, with 100% certainty, say that whoever invoked it was in fact the service it expects to be invoked by.

So, as an example, you have the API triggering the Lambda function that stores in an S3 bucket, and then that triggers the rest of the chain. What if some insider or a developer throws a file into that bucket and starts the chain from the middle? Is anything even validating that? When you create those applications with some framework, or SAM, or the Serverless Framework, it's not the case that the default is deny-all and then you loosen up the security permissions a bit. Anyone with execution permissions can run any function in the account, and anyone with access to the bucket can drop files into it. So you have to think about that. I think that in the future, at least, this will change, and you will be able to create applications where the default is to deny everything, and then you say, okay, only that function can write to this bucket, and anyone else is blocked. That's how I would want to see things evolve.
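Until that kind of deny-by-default wiring exists, each function can at least sanity-check where its event came from. A sketch for the S3-triggered step in the chain Ory describes (the bucket name and key prefix are placeholders, under the assumption that only the upstream function writes under that prefix):

```python
EXPECTED_BUCKET = "orders-intake"  # placeholder
EXPECTED_PREFIX = "validated/"     # placeholder: only the upstream step writes here

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Refuse objects that didn't arrive through the expected path, e.g. a
        # file dropped into the bucket by hand to start the chain mid-way.
        if bucket != EXPECTED_BUCKET or not key.startswith(EXPECTED_PREFIX):
            raise ValueError(f"unexpected event source: s3://{bucket}/{key}")
        process(bucket, key)

def process(bucket, key):
    print(f"processing s3://{bucket}/{key}")  # stand-in for the real work
```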

Jeremy: Yeah. Totally agree. All right, let's move on to number 10. This is improper exception handling and verbose error messaging. And the reason why I compare this to number five, which was inadequate logging, is because you have a tendency to log too little, but then you also sometimes have a tendency to log too much.

Ory: Yeah, just like how you debug IAM permissions. We all do that. So we all do debug prints, and then sometimes we leave them there; sometimes we don't catch the exceptions properly and that spills out. Just looking at things like Shodan, or Google, you can do Google dorking and find a lot of very juicy, verbose error messages. And in serverless, at least at the time of writing the document, the debugging capabilities you had weren't on par with traditional applications, where you're working in an IDE and you can debug line by line. So there is a tendency, which we see, for people to write verbose error messages and then leave them there. That's how this relates to serverless specifically. But yeah, this is a classic error in any type of application.
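The standard pattern here: log the detail internally, return something generic to the caller. A minimal sketch:

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    try:
        return do_work(event)
    except Exception:
        # The full stack trace goes to CloudWatch Logs for the defenders...
        logger.exception("unhandled error, request %s", context.aws_request_id)
        # ...but the caller sees only a generic message, not internals.
        return {"statusCode": 500, "body": "Internal error"}

def do_work(event):
    return {"statusCode": 200, "body": "ok"}  # stand-in for the real logic
```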

Jeremy: Absolutely. All right, so this next one is actually somewhat more specific to, I guess, cloud native, and this is number 11. This is legacy/unused functions and cloud resources.

Ory: Yeah, that's a good one, actually. I think just a couple of days ago, I was giving a presentation at the London serverless conference, and we had a booth where I was showing the serverless radar that we have, and somebody asked me why there were more than, I think, a few dozen functions just lying there on the radar, not related to the demo I was giving. And I said, okay, I wrote these functions and I left them; some of them are two years old. I'm not even deleting them. Why would I delete old functions? And they're just lying there with their IAM permissions, waiting for somebody to access them, and invoke them, and exploit them.

So, I saw that somebody published a pruning plugin for the Serverless Framework lately.

Jeremy: Yes.

Ory: Yeah. So I think in any cloud account that I have, there are hundreds of functions just lying there, and last I checked, in most of my accounts I had something like 600 roles that [inaudible 01:11:38]. Lambda execution role one, Lambda execution role two, and three. And nobody's pruning them; you won't until you hit the limit, the account limit, and by then you have a few years of unused resources to get rid of, and there's really no reason for them to stay there. Again, not serverless-specific, but serverless does bring a new set of resources that you can leave behind, mostly functions.
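The plugin being referenced is likely serverless-prune-plugin. As a rough boto3 sketch of flagging cleanup candidates (note that LastModified reflects the last code or configuration update, not the last invocation; the 180-day cutoff is arbitrary):

```python
from datetime import datetime, timedelta, timezone
import boto3

client = boto3.client("lambda")
cutoff = datetime.now(timezone.utc) - timedelta(days=180)

paginator = client.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        # LastModified is an ISO-8601 string, e.g. "2019-03-01T12:00:00.000+0000"
        modified = datetime.strptime(fn["LastModified"], "%Y-%m-%dT%H:%M:%S.%f%z")
        if modified < cutoff:
            print(f"stale? {fn['FunctionName']} (last modified {modified:%Y-%m-%d})")
```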

Jeremy: Yeah.

Ory: Yeah.

Jeremy: I think that's one of those things where I'm the exact same way. I go in, I look, and I'm like, I don't even know what this function does, I don't know how it got published; I probably deleted the local folder that actually published the app. And I'm like, how do you even go and trace everything? Maybe it wasn't even published through CloudFormation, because it was a test you were doing or something. So yeah, that is definitely something you want to clean up.

Ory: It took me, I think, three days to find a function once. I had a function that scans the entire set of S3 buckets that we have for any that are publicly open, and sends me an email every day. That was an experiment that I had done, I think, more than a year ago, and I couldn't even figure out what account, what region, and what the name of the function sending me the emails was. It took us, I think, three days or even more to eventually find where it was, by looking at the email headers to figure out which AWS zone, not zone, region it was in. And then, as you said, I had completely deleted the project, so it's not like I still had the Serverless Framework project that I used to deploy it.

Jeremy: That's crazy. All right, so the next one is SAS-12. That's cross-execution data persistence, and I think this is a really important one. Obviously, in traditional applications, we have global variables that can get manipulated and are reused across each execution. With serverless, we're used to this single-execution model, but we do save data outside of the handler, which gives us the ability to reuse it on warm invocations. So what's the issue with that?

Ory: I think you explained it very well. People don't think about the fact that the same container is being reused. It's frozen between executions, but then it's revived, or defrosted, or whatever they call it, and the environment stays the same. So if you had anything stored in /tmp, which is basically the only place where you can store things locally, or if you're storing things in the environment variables, like session variables, it stays there. And if the next execution belongs to a different user who's malicious, if, again, you screwed up on one of the other 11 of the top 12, there is a chance that this information will leak.

And I don't think there is a way to automatically flush everything between executions; you have to code that. So delete or destroy what's in /tmp, throw away environment variables, and start fresh, if you want. Again, that really depends on the application itself. But this is something that you have to keep in mind stays there, and we've seen a few examples, some demos, where people store stuff in /tmp and then somebody comes in and grabs that data. Just something to keep in mind. And actually, this goes back to the theoretical versus practical. This is a classic one where we haven't seen it getting exploited; it's purely theoretical. We added it to the 2019 version because we believe this is almost obviously some kind of vector that attackers will target, because that's pretty much what's left behind between executions. So it only makes sense that somebody will eventually use it somehow.
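A sketch of the kind of manual cleanup Ory describes, run at the start of each invocation (the environment variable name is a placeholder):

```python
import os
import shutil

def scrub_environment():
    # /tmp is the only writable path in Lambda and survives warm restarts.
    for entry in os.listdir("/tmp"):
        path = os.path.join("/tmp", entry)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
    # Drop any session-like variables a previous execution may have set.
    os.environ.pop("SESSION_TOKEN", None)  # placeholder variable name

def handler(event, context):
    scrub_environment()  # start each execution from a clean slate
    return {"statusCode": 200}
```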

Jeremy: Yeah. And actually, not even putting this in a security context, but just in the category of things you can do to step on your own toes from a security standpoint: if you're not fully aware of what data you're saving in global variables from execution to execution, it's very possible that user A accesses the function, and then user B gets that thawed version of the function on the next invocation. And if you've saved data about the first user, or some other information that you might then share back, maybe you're appending rows to an array or something like that. So these are ways you can basically make your own application insecure just by not knowing that this exists. So I do think it's important. Yes, it's maybe theoretical from an exploitation standpoint, but certainly possible to do to yourself if you're not paying attention.

All right, so I think that's it ... I mean, we went through that in more depth than I was planning to. We've been talking for quite some time, and I hope what people are taking away from this is, again, that not all of this stuff is specific to serverless, but it certainly is a set of things to pay attention to.

So, I want to wrap up before I let you go. I do want to talk about the future of serverless security, because I see things changing quite a bit from just a year or two ago in terms of how people are starting to address these things. One of the things that I read recently, this was a study, I don't know how valid it was, said something like 63% of containers, so containers, not serverless, run for less than 10 minutes and then go away. So this nature of ephemeral compute, even if it's containers, and it's Kubernetes, or Fargate, or any of these other orchestration and management systems, the time from when containers actually start running to when they get recycled is very, very low. So ephemeral compute, whether you're using containers or serverless, seems like the future of the cloud. So what are your thoughts on that, and what does security look like next year or in the future?

Ory: Interesting. Wow.

Jeremy: Sorry, I packed a lot in there.

Ory: A lot of different ... No, that's good. That's good. It's a very interesting topic. Let's start with containers, and ephemeral, and serverless, and all of that. A container is a technology for how you package applications, existing applications that we had with previous technologies, like web apps and databases; you just package them as containers so it's easier for you to redeploy them. So you can think of containers as a technology that actually came from the operations side of the world, from DevOps. It's a technology that helps them more easily deploy and maintain infrastructure.

I think, on the other hand, serverless is something that is more geared toward developers. It's a technology, or an approach, or a platform, whatever you want to call it, that helps developers get rid of the need to actually maintain infrastructure and rely on IT teams, and it makes it much easier to deploy applications and go into very fast, agile CI/CD deployment cycles where you push changes, you don't have to ask anyone, there is no gate, nothing is slowing you down.

So, I don't think that looking at the fact that most containers run for less than 10 minutes means that we should all be migrating to serverless, that's not the right reason to do that. You might still run containers and feel comfortable with the way you package existing technologies, web servers, databases, etc. You might want to consider using services like Fargate where you don't own the infrastructure and you don't care about the underlying container orchestration. But the packaging is still more comfortable or easy for your teams to do through containers.

So, again, the running time, I don't think that's the right reason to go serverless. Again, if you have containers that run for 30 seconds and then you destroy them, maybe it's still easier for you ... you may need some things that require you to package them as a container. So that's just a comment about containers versus serverless, and which one is going to take over the world, and that whole thing. Now, about security and where this is all going, if I remember the discussion correctly.

Jeremy: My convoluted question, yes. I mean, with the ephemeral nature of this compute, how does security apply differently to ephemeral compute versus what we've traditionally seen with long-running resources?

Ory: Yeah, that's a discussion I actually don't like to have. I remember, in the early days of serverless, people talked about the fact that it's ephemeral, it's not staying there, you can't infect it, malware is irrelevant, all of those claims. You can't store anything there. I think we've already demonstrated that that's not the case. You can infect serverless functions. Again, it depends on the vulnerability and the way you exploit it. But the fact that it's ephemeral, you can overcome that by reinfecting. So let's say you have a remote code execution through an HTTP API call; I can reinfect the function. It might not be the exact same instance, but again, it depends on what I'm trying to achieve.

On the other hand, the likelihood of me using that to get into your network, that's pretty much done. So, yeah, the fact that those platforms are ephemeral has its benefits and its drawbacks, on both the attacker and the defender side. At this point, I don't think this is something that's worth paying too much attention to.

Jeremy: But I'm just curious about the visibility of it, right? I think, especially in being able to inspect log files and some of that other stuff, the ephemeral nature of some of these compute models just seems like it makes it harder for the defender to put all this information together and make sure that their application is secure.

Ory: Right. Actually, there are two sides to this issue. If you think about it, if you do logging properly, you're probably logging to CloudWatch, which means that if you did IAM permissions properly, the function will be able to write to the logs but won't be able to read from them, and the ability of an attacker to destroy logs and cover their tracks becomes much lower. Whereas if you found a remote code execution in a traditional web app, you could probably destroy the logs and nobody would be able to trace the actual attack events back. So in that sense, I think if you follow the book, if you write the logs properly and you do IAM properly, you're much better off.
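That write-but-not-read arrangement, expressed again as a Python dict in the shape of an IAM statement for illustration (the region, account ID, and log group are placeholders):

```python
# The function can create log streams and append events, but has no
# logs:Get*/logs:Filter* permissions, so a compromised function cannot
# read back or tamper with its own trail.
logs_write_only = {
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
    ],
    "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/*",
}
```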

Jeremy: Right. Now do you think this is something that cloud providers are going to put more emphasis on? I mean, this idea of application security, it seems a little bit outside their purview, but when they're managing so much of the underlying resources, it seems like they would want to do more in this space.

Ory: The simple answer is no. I think cloud providers are now trying to get us all hooked on their cloud platforms, and so they will build more features that are related to enabling more use cases. Obviously, they are making some efforts around security, but just enough that people won't be scared of adopting these new technologies. And I think it's correct to leave the security aspects to the security vendors, who are experts in this field and have the right personnel and experience; it contributes to an ecosystem of vendors. You want your cloud to have a nice, rich ecosystem of vendors and choices, so customers can use different tools. So I think at the end of the day, they will do the minimum required, at least in the next few years, and that's correct.

They now need to put an emphasis on how to get everybody on board; we want to see people adopt these cloud native environments. So: more tooling, more debugging, more tracing, more logging, things like that, and not paying attention to runtime protection and the like, where obviously there are vendors that already have experience.

Jeremy: Sure. All right, so I've got one more question for you, and this is around the containers versus serverless debate. It's not really about the debate, because I think both of those things live in harmony and will continue to be used, and maybe I'm harping too much on the ephemeral aspect of this. But in terms of container security and serverless security: serverless essentially is just going to be a container running somewhere that you probably don't know about, right? And maybe it's Firecracker or something a little bit different, or some of the more lightweight ones like V8 in Cloudflare Workers and some of that stuff. But is there going to be a big difference between how container security and serverless security are handled, or do you think there's eventually going to be a merger of the two paradigms?

Ory: Interesting. Okay. So, first of all, regarding containers, I see in general two types of platforms: the ones you manage yourself, like traditional containers, and the ones that are fully managed, more like Fargate, where you run them in a cloud native environment. Actually, by the way, I don't consider containers to be cloud native. I have no idea what started this whole ... I don't know who coupled these. I think serverless is cloud native. If I run a container on my own host inside my network, it's not cloud native in any way. But serverless, and the Fargates of the world, and the fully managed public cloud container services, those are more similar, in the sense that the majority of the backend heavy lifting is done by consuming cloud provider services: buckets, databases, etc. And there, the security is different.

First of all, obviously, we talked about this, but you can't deploy anything other than a serverless security platform, or whatever, a cloud native security platform. You can't deploy agents and things like that. And you don't control the network. More importantly, the network disappears in those cases. You no longer deal with layer three/four networking, so firewalls and things like that are less relevant, and everything is done through APIs. These cloud services offer you APIs, and then control over who can access those APIs is dictated by IAM permissions. That's why people say, and I love this, that IAM is the new perimeter.

And basically, network layer security is pretty much dead. Yes, there are people running VPCs and things like that, but that's not the common case. And if you look at the other type of containers, the traditional kind where you package things as containers and run the container yourself, you still control the network, and you can put a web application firewall there, and a next-gen firewall, and you can do network access controls and things like that. That's a completely different story. So, yes, some runtime behavioral protection logic can probably apply to both, but the way you deploy it, the way you do it, is completely different.

Jeremy: Awesome. Well, I think that certainly demonstrates why you are a senior distinguished research engineer for serverless, because, honestly, Ory, thank you so much for having this conversation with me. Again, not to go back to the FUD thing, but I think this information is super important to get out. Should it scare people? No. I think it should just make people aware that security doesn't go away with serverless, right? It's not magic. Out of the box, it's probably way more secure than anything more traditional that you would launch, but there are still things you have to pay attention to, and many of these things now fall on the developer; you don't have the benefit of some of those higher-level Ops, DevOps people taking care of some of that protection and security for you, or the SecOps people.

So anyways, again, thank you so much for being here. If people want to find out more about you, and more about Palo Alto Networks and the Prisma brand, how do they do that?

Ory: Well, you can just type Prisma Security or Palo Alto Prisma into Google and you'll get to the page. I don't want to do a pitch here.

Jeremy: Feel free.

Ory: So just go to the website and see what we offer. It's an end-to-end cloud security platform. I do want to add one more comment, and I think you summarized it well: the last thing we want is to get people scared. We want people to adopt serverless. I definitely think that serverless is what people imagine the cloud to be, which is why I love serverless. I wouldn't think twice about using it today, and it would be my first choice for almost anything I build. There are just security things that you have to keep in mind, and some nuances that you have to keep in mind, which is why we published that document and why we do those presentations and conferences. But don't be scared. The one thing that we want people to do is to adopt more and more serverless.

Jeremy: Absolutely. All right. And if people want to find you on Twitter?

Ory: @Ory, O-R-Y, Segal, S-E-G-A-L. Or just look for Mr. Serverless Security. I'm just kidding.

Jeremy: That's going to be your new title. Awesome. All right. I will get all of that into the show notes. Thanks again Ory.

Ory: Thank you. Thank you very much for having me. It was awesome.
