Serverless Chats

Episode #95: Going Serverless with IBM Cloud Code Engine with Jason McGee

About Jason McGee

Jason McGee, IBM Fellow, is VP and CTO at IBM Cloud Platform. Jason is currently responsible for technical strategy and architecture for all of IBM's Cloud Platform, across public, dedicated, and local delivery models. Previously, Jason served as CTO of Cloud Foundation Services and as Chief Architect of PureApplication System, WebSphere Extended Deployment, WebSphere sMash, and WebSphere Application Server on distributed platforms.

Watch this episode on YouTube: https://youtu.be/yH_mgW2kGzU

This episode sponsored by IBM Cloud.

Transcript:
Jeremy: Hi, everyone. I'm Jeremy Daly and this is Serverless Chats. Today I'm joined by Jason McGee. Hey Jason, thanks for joining me.

Jason: Thanks for having me.

Jeremy: So you are an IBM fellow and the VP and CTO of the IBM Cloud platform. So I'd love it if you could tell our guests a little bit about yourself and what it is that you do at IBM.

Jason: Sure. I spend my day at IBM worried about developers and platform services on our public cloud. So I'm responsible for both the technical strategy and the delivery of our Kubernetes and OpenShift platforms, our serverless environments, and kind of all the things that surround that space, logging, and monitoring and other developer tools that kind of make up the developer platform for IBM Cloud.

Jeremy: And what about yourself? What's your background?

Jason: Been a software, kind of middleware guy, my whole life. I used to be the chief architect for WebSphere app server. So I spent the last 20 plus years working on enterprise application platforms and helping companies be able to build mission-critical business systems.

Jeremy: Awesome. So I had Michael Behrendt on the show not too long ago and it was great. We talked about a whole bunch of different things. IBM's point of view of serverless. We talked a little bit about the future of serverless and we talked about the IBM Cloud Code Engine, which I want to get into, but for the benefit of our listeners and just because I'm so fascinated by some of the things that IBM is doing now with serverless, it's just super interesting. So could you sort of give me your point of view or IBM's point of view on serverless and just sort of refresh the listener's memory sort of about how IBM is thinking about serverless and how they're probably thinking about it maybe differently than some of the other cloud providers?

Jason: Yeah, sure. I mean, it's such a fascinating space and it's really changed a lot, I think, over the last five years or so, from its beginnings being very aligned with serverless functions and kind of event-driven computing, to becoming a more general concept about how developers especially can consume cloud platforms. I think if you look at the IBM perspective on serverless, there are a couple of layers to the problem that we think about. First is we've been pretty clear that we think Kubernetes and distributions of Kubernetes like OpenShift are kind of the key foundation compute environment for developers to use going forward. And we've done a ton of work in kind of building out our Kubernetes and OpenShift platforms and delivering them as a service on our public cloud. And that's an incredibly flexible platform on which you can build really any kind of application. I think over the last five years, we've proven we can run anything on Kubernetes: databases and AI and stateless apps and whatever you want.

Jeremy: Right.

Jason: So very, very flexible. However, sometimes flexible also means complicated and it means that there's lots to manage and there's lots of concepts to get your head around. And so we've been thinking a lot about, well, how do you actually consume a platform like Kubernetes more easily? How does the developer stay more focused on what they're really trying to do, which is like build application logic, solve problems? They don't really want to stand up Kube clusters and configure security policies. They just want to write code and run code and they want to get the power of cloud to do that. Right? And so I think serverless has kind of morphed to be, for us, more about the experience that we can build on top of that container platform that's more oriented around how developers get work done and allows them to kind of more easily take advantage of the scale and power of public clouds without having to kind of take on the burden of a lot of that kind of work and management.

And so the work that we've been doing is really aligned in that direction, that we've been working in projects like Knative, in the open source community to build simpler abstractions on top of Kubernetes. And we've been starting to deliver those in our cloud through things like Code Engine.

Jeremy: Yeah. And I think that's interesting too because I always have, this is probably the wrong way to say it, but it's sort of a chip on my shoulder about Kubernetes because it just got so complicated. Right? It's just so many things that you have to do, so hard to manage. And as a serverless guy myself, I love just the simplicity of being able to write some code and just get it out there, have it auto scale, tie into all those events. So I think that a lot of cloud providers have sort of moved that way to say like, "Well, we're going to manage your Kubernetes cluster for you." Right? Which essentially is just, I think moving backwards, but also moving forwards at the same time, if that makes sense. But so in terms of the use cases that this opens up because now you're not necessarily limited to a sort of bespoke implementation of some serverless platform, you have a lot more capabilities. So what types of use cases does this open up?

Jason: Yeah. I mean, I have maybe a couple of comments on that. I mean, so I think with Kubernetes, you have the complexity of managing the Kubernetes environment, but even if that's totally taken care of for you, and even if you're using a managed Kubernetes service like the things we offer on IBM Cloud, you still have that kind of resource burden of using Kubernetes. You have services and pods and replica sets and namespaces and all kinds of concepts that you have to kind of wrap your head around and know how to use in the right way. And so there's a value in like, "Can we abstract that? Can we move away from that?" And it's not like this idea hasn't been tried before. I mean, we've had PaaS platforms, like kind of Cloud Foundry style, Heroku, very opinionated PaaS environments in the past and they definitely simplify the user experience. However, they came with this negative, which is if you don't fit within the box of the opinion ...

Jeremy: Right.

Jason: ... then you can't do what you want to do. And the cost of going outside the box was super high. Maybe you had to completely switch platforms. You were completely blocked. You had to switch to some other approach. And so part of what's informing us as we think about this is how do you have more of a continuum? You have a simple model. It's aligned around what you're doing. Just run my source code, just run my container image. I want to run a batch job, but it's all running on one platform. They're running next to each other. You can drop down a layer into Kubernetes if you want to. If what you're trying to accomplish needs some of that flexibility, you should have access to it without having to kind of start over. And so that's how we've approached the problem a little bit differently: bringing this all together into kind of one unified serverless environment on top of Kubernetes.

And that lets us handle different use cases. That lets us handle kind of stateless data processing and functions. That lets us handle simple web apps. That lets us handle very data-intensive, high-scale computation and data processing, async processing like batch, all in one combined way.

Jeremy: Right. Yeah. And I think it's interesting because there are artificial limitations that sometimes get put in place on serverless platforms. If you think about AWS Lambda, for example, you get 15 minutes of compute, and they've bumped things up. So now, and again, I just sort of grew up in the AWS environment, but they have things like 10 gigs for a function or something like that. And so they've increased these things, but they're sort of artificial limits that I think, depending on the type of workload that you're doing, can really get in your way, especially if, like you said, you're doing these data-intensive things. So from an IBM perspective, I mean, that's sort of gone, right?

Jason: Right. Exactly. That's a great, very concrete way to look at the problem. The approach that's been taken in some of the other cloud environments is that these different use cases, like serverless functions, single containers, batch processing, are different services. And every service has its own kind of limitations or rules about what you can and cannot do: how long your thing can execute, how big your code can be, how much data you can transfer. We've taken a different approach to say, "Let's eliminate all those limits and let's have one logical service, one environment that supports all those styles." We can still expose a simplified kind of consumption model for the developer, like just give me your source code or just give me your image, but I can run it in a way that doesn't have those computational limits, and therefore I can do more. Right? I can run more kinds of workloads. I don't run up against some of those walls that kind of stopped me from getting my work done.

Jeremy: Right. Right. Yeah. And I like that approach too because I'm a big fan of managed services. I think that if you have a service that does image recognition for you, that's great. And if you have a service that does queuing for you, that's great. But in some cases, you start stringing together so many different services and I feel like you lose a lot of that control. So I like that idea of just basically being able to say, "Look, I've got the compute. I can do whatever I need to do with it. It will scale to whatever I need it to scale to." And I think that's where this idea of IBM Cloud Code Engine comes in, which just became GA, so I'd love it if you could tell the listeners exactly what that is.

Jason: Yeah, absolutely. So Code Engine is the new service that we launched that makes some of these concepts I've been talking about real. It is a service that allows developers to deploy functions, containers, source code, and batch jobs into IBM Cloud. The entire environment behind that application is managed for you. So we handle all of that: you don't manage clusters, you don't provision infrastructure. You can scale all the way to zero, so you can literally only pay for what you're using. You can scale up to thousands of cores processing your application in parallel, and we manage that entire runtime environment for you. So you can think of it as a multi-tenant, shared, Kubernetes-based runtime environment that you can run your workloads on that presents to you the personality that you need for different workloads. And because it's all in one service, if you have an application that's like a mix of some single containers and batch jobs, they can actually talk to each other over a private network connection. They can work together instead of being kind of siloed in these completely different environments.

Jeremy: Right? Yeah. And so from the developer, I guess, perspective, you had mentioned that you can deploy just code or you could deploy a container if you want to. So what does that developer experience look like? So is this something where I could just say, "Look, I don't need to have a whole ops team now managing this for me. If I just want to write code, deploy it into these things, I'm sure there's some things I need to know," but for the most part, what does that developer experience look like?

Jason: Yeah. So you absolutely could do it without a whole ops team. The experience right now has maybe kind of three basic entry points. You can give me source code and we will take care of compiling that source code, combining it with a runtime, executing it for you, giving it a web endpoint, scaling it. You can give me some hints about kind of how much resource you think you need and things like that and we can scale that up and down and manage it for you, including all the way down to zero. That's nice if you're coming from maybe a historical PaaS background or it's just like, "Here's my code, run it for me." You can have that experience with Code Engine. You could also start with a container image. So lots of developers now, because of things like Kubernetes and Docker, are very familiar and comfortable with packaging up their application as a container image, but you don't want to then deal with creating a cluster and dealing with Kube.

So you can just say, "Here's my image, run it for me." And one of the advantages we have with Code Engine is we can really do that with any container image. You don't have to have a container image that follows some particular framework or that's built in a very special way. We can take any container image, and you can literally point me at the image and say, "Run this for me," and Code Engine will execute it and scale it and manage it for you. Or you can start with a batch job interface, so more of an async kind of parallel job submission model. Maybe I'm doing Monte Carlo simulations or data processing and I want to parallelize that across a whole bunch of machines and cores; Code Engine gives you an interface for that. So as a developer, you kind of start with one of those three entry points and let Code Engine take care of how to run it, scale it, and keep it highly available, and things like that.
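
For readers who want a concrete flavor of those entry points, here is a rough sketch using the `ibmcloud ce` CLI. The image names are placeholders, and flag spellings may vary by CLI version, so treat this as illustrative rather than authoritative; the source-code path comes up in the next exchange.

```sh
# Sketch: the container-image and batch-job entry points (names are placeholders).

# Run an existing container image as an auto-scaling app:
ibmcloud ce application create --name my-app \
  --image icr.io/mynamespace/my-app:latest

# Define a batch job once, then submit a parallel run of it.
# --array-indices fans the job out across many parallel instances.
ibmcloud ce job create --name my-job --image icr.io/mynamespace/my-job:latest
ibmcloud ce jobrun submit --job my-job --array-indices "1-100"
```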

Jeremy: Right. So I love the idea of the batch jobs. I want to talk about that a little bit more, but let's go back to some of the use cases here. So what if I was building just like a REST API, that seems to be a very popular, serverless use case, what would I do for that? Do I need to have some sort of an API type gateway type thing in front of it? Or how does that work?

Jason: No, Code Engine provides all that for you. So you would literally either just take your implementation and package it in a container or point us at your source code directory. If you have source code, we use things like Paketo Buildpacks to build a runtime around that source code, and so you can use different languages. So with our CLI tool, you either point us at the source code directory and we'll build it and package it in a runtime and run it for you, or you point us at a container image that you've uploaded to our container registry or to your container registry of choice, and then Code Engine will execute that for you. It will give you that web endpoint, right? So it'll give you an HTTP endpoint that you can use to access that service. And it will watch the demand on that system and scale it up and down as needed. And by default, we'll just scale it to zero. So it'll just be kind of registered in the system and it'll take care of scaling it up as needed to handle the demand on the app.
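
A minimal sketch of that source-code path, again assuming the `ibmcloud ce` CLI with approximate flags: point Code Engine at a source directory, let it run a buildpack-based build, and read back the generated HTTPS endpoint.

```sh
# Deploy straight from source; Code Engine builds an image with buildpacks,
# deploys it, and assigns a public HTTPS endpoint (flags approximate).
ibmcloud ce application create --name my-api \
  --build-source . --strategy buildpacks

# Inspect the app to see its URL, scaling settings, and instance count.
ibmcloud ce application get --name my-api
```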

Jeremy: All right. Cool. And then what about these batch jobs? So I talked a little bit about this with Michael and this idea of being able to run massively parallel execution. So how does that all work?

Jason: Yeah. So similar, obviously with batch, there's a little bit more kind of metadata that you have to provide to describe the job and what you want to execute and how things relate to each other. So there's some input data you provide along with the implementation of the batch job, which itself could just be like a container image, and you submit that job. So the CLI interface is a little bit different. You're not standing up a long-running REST endpoint, you're submitting a job to Code Engine for execution, and it will go take that job and execute it and parallelize it for you. You can also use frameworks on top. One of the things we've been doing a lot of work on, maybe Michael talked about it a little bit when he was here, is some work we're doing around Ray. Ray is a really interesting new project that lets you do kind of distributed computing, especially around data workloads, in a really easy way.

And so you can actually stand up Ray on top of Code Engine and so Ray acts as kind of the application interface for the developer to be able to easily parallelize their code, particularly Python code, and then Code Engine acts as the runtime below it. And you can take a simple function in Python, mark it as Ray remote and it'll now execute on the cloud and distribute itself across a thousand cores. And you get your answer back 20 times faster than you would have running it locally. And so you can have those kinds of async environments as well.
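
For a concrete flavor of the Ray pattern Jason describes, here is a minimal, self-contained sketch. The Monte Carlo workload and the task counts are illustrative, not from the episode, and connecting to a Code Engine-backed cluster would need additional `ray.init` configuration.

```python
import random
import ray

# Starts a local Ray runtime for testing; pass an address to ray.init()
# to attach to a remote cluster instead.
ray.init()

@ray.remote
def estimate_pi(num_samples: int) -> float:
    """One Monte Carlo trial: estimate pi by sampling the unit square."""
    inside = sum(
        1
        for _ in range(num_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_samples

# Fan out 1,000 independent tasks; Ray schedules them across all
# available cores (local ones here, cloud capacity on a real cluster).
futures = [estimate_pi.remote(100_000) for _ in range(1_000)]
results = ray.get(futures)  # blocks until every task finishes
print(sum(results) / len(results))
```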

Jeremy: Awesome. And so what about some customers? So do you have customers that are having success with this now?

Jason: Yeah, we have a number. I mean, we have the European Molecular Biology Laboratory, which is using it to do science processing and provide scientists with access to the large-scale compute environments of the cloud. We have some airlines that are leveraging this. The airline scenario, I think, is actually kind of interesting because it shows the power of combining REST endpoints, more interactive workloads, with batch workloads. In their case, they're exploring using it to do dynamic pricing. So if you think about how you do dynamic pricing, there are kind of two dimensions. There's a very interactive one: somebody is getting a price on a ticket or a route, and you want to be able to present them with dynamic price information as part of that web interaction. But then there's a data processing angle.

You're looking at all kinds of data coming from your backend systems, from route data, from the fleet, and historical information. And you're trying to decide what the right price table is for that route. So you're doing batch processing in the background, and then you're doing this interactive processing. You can implement both halves on serverless with Code Engine and they scale as needed. If you're getting a lot of traffic on the web front end, it scales up as needed without you having to do anything. So they can kind of combine both halves in one environment.

Jeremy: Right. Right. And so in terms of, I think we kind of talked about this a little bit, but when you see all these different services, right, no matter what it is, whether it's Google's Kubernetes engine or it's EKS on AWS or something like that, I think a lot of people look at these and think, "Oh, it's just another managed Kubernetes cluster." Right? So what are the major differences? I know we talked about it a little bit, but maybe you could just be a little bit more succinct and talk about why it's so different from previous generations of tools or some of the other competing products out there.

Jason: Yeah. So if you look kind of behind the curtain on Code Engine, you'd see a couple of things. One is there is Kubernetes there; there is a Kubernetes environment there. The difference is that that Kubernetes environment is completely managed by the Code Engine service. If you look at IBM Cloud, we have the IBM Cloud Kubernetes Service and our Red Hat OpenShift service. In those services, we're managing a cluster on your behalf, but we give you the cluster. It's like, "Here's your Kube cluster. We'll manage its life cycle, but you have direct access to it." With Code Engine, we have a Kube cluster there, but we completely manage it in all respects. You have no kind of direct access to it. That allows us to manage scale and capacity. We run that in a multi-tenant way. I mean, we have security and isolation between tenants, but logically you can think of it as like a big Kube cluster that lots of users are sharing, which is how the pay-as-you-go model ultimately works, because we're keeping track of what you're actually running and just charging you for that.

So one part of it is fully managing that runtime environment. We've layered on top of that things like Knative, so that we have that developer abstraction: a simpler way to define services, to do the source code and image stuff that I talked about. That's coming largely through things like Knative, which, again, we're completely running for you, but it gives you some of that simple interface we talked about, and we're doing that in an open-source way with the community. So it's not like it's proprietary to IBM Cloud. And then on top of that, we built kind of the batch processing system, so batch scheduling and some of these unique interfaces, the command line interface and the user experience, to get into that environment for the different workflows that I talked about. And one of the cool things is, because we built it on top of that Kubernetes layer, we can also expose the Kubernetes API if we want.
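
For context, and as an assumption about the machinery rather than something spelled out in the episode, the Knative abstraction Jason mentions reduces an app deployment to a small declarative object, roughly like the sketch below; a service like Code Engine manages objects of this shape on the user's behalf.

```yaml
# A minimal Knative Service (names are placeholders). Knative handles
# routing, revisions, and scale-to-zero for the container it references.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - image: icr.io/mynamespace/my-app:latest
```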

So like the Ray example I gave you: Ray doesn't really know anything about Code Engine, but Ray knows how to deploy to and leverage a Kube cluster. So we're able to actually hand Ray the Kubernetes API server endpoint inside of Code Engine for your instance, and that framework can use Kubernetes to stand itself up. And then you can use the kind of simple abstractions on top, and that's still all in Code Engine. It's still pay as you go and it still scales to zero. And so that's what I meant: you can kind of blend the lines, and you, or the framework, can drop down to something like Kubernetes as needed to give you that flexibility.

Jeremy: Yeah, that's awesome. So you mentioned you have a fully managed Kubernetes service and then you also have a bunch of other serverless services that run within IBM Cloud. So OpenWhisk or, I guess, IBM Cloud Functions now. And then also, I mean, you mentioned Cloud Foundry, which is sort of a PaaS, but also sort of an easy-to-use serverless environment in a sense. Right? And so I guess, is this an evolution? Is this where you suggest people go?

Jason: Yeah. Yeah. So I think the simplest way to think about it is yes, Code Engine is the evolution of those ideas. It doesn't necessarily have a direct technical lineage, always, between those projects, but the problem that IBM Cloud Functions with OpenWhisk was trying to solve and the problem that Cloud Foundry was trying to solve with start-from-source-code PaaS are both represented in what we're doing in Code Engine. So Code Engine will be the kind of natural evolution path for those workloads and for the problems that those users are using those platforms for. The Cloud Foundry one, I think, is super interesting, in the sense that the rise of Kubernetes has clearly pivoted many people who were doing Cloud Foundry into doing Kubernetes.

Jeremy: Yeah.

Jason: And people are using Kubernetes as their foundation, and the Cloud Foundry project, which we're deeply involved in, has done a lot of work to kind of realign Cloud Foundry with Kubernetes in a better way. But what never went away, what people always still saw value in with Cloud Foundry, was the simple "push my source code" developer experience. Right? And so that still carries forward. And with Code Engine, we're taking that same experience that we had in Cloud Foundry and we're bringing it into this new service and onto Kubernetes, so the developer still gets that similar experience, but without the boundaries that we talked about. The challenge with Cloud Foundry was always like, oh, as soon as you want to do stateful things, or you want to do async jobs, Cloud Foundry didn't solve that problem. Go use a Kube cluster or go use some completely different environment. And so it's kind of the same experience with the boundaries removed, and that's where we would see people go.

Jeremy: Right. So if I'm in one of those services, now, if I've got things written in Cloud Functions or in Cloud Foundry, and I've hit some of those limits, or I just want to take advantage of some of the cooler things that Code Engine does, is there a simple migration path for those?

Jason: Yeah. In general, yes. For Cloud Foundry, for sure. It's pretty straightforward to take the same source code directory that you have and just push it to Code Engine instead. Right? So I think the path for Cloud Foundry, I mean, there are edge cases with everything obviously, but the basic workflow is the same. You can use the same source input directories. We mapped to Paketo Buildpacks, and a lot of that stuff came out of Cloud Foundry. And so that has a really clean path. For Cloud Functions, there's a little bit of a timing thing. In general, yeah, you can take your same functions and you can run them on Code Engine. OpenWhisk has some advantages still that we haven't quite gotten built into Code Engine yet. It's got faster startup times, for example, right? With the runtime model behind Code Engine, we're still starting a container, like a full container.

In OpenWhisk we had done a bunch of work on warm start of containers and container pooling, so we can get startup times of a small number of milliseconds on those functions. And some of that hasn't worked its way into Code Engine yet. So there are still some cases where Cloud Functions has some capability that doesn't quite exist in Code Engine yet, but over time that will get filled in and there'll be a simple path there to move all those workloads over to Code Engine as well.

Jeremy: Right. So with Code Engine, because you mentioned this idea of sort of like the cold starts. So does Code Engine keep containers warm for a certain amount of time or is it always a cold start?

Jason: It is, in general, a cold start. In the scale-up, scale-down cycle, it may keep some of them around for a while, so it's not being overly aggressive about scaling them down and bringing them right back. But it's not doing some of the warm start tricks yet that OpenWhisk was doing, where we have a pool of primed container instances and then we're injecting code into them and running them. That's work in progress. There's work to do both in Knative to improve that stack and then stuff to do in Code Engine. There's a balancing act there too ...

Jeremy: Yeah, definitely.

Jason: ... on things like network isolation and getting on customer VPC networks and other things which are harder to do in that warm start model.

Jeremy: Yeah, definitely. All right. So if somebody wanted to get started with Code Engine, what's the best way for them to do that, just sign up and start writing some code or how do they do that?

Jason: Yeah, kind of. I mean, obviously, we've been talking a lot about how developers use these things. And so I always think the best way to get started is either to build something on it or to try out some specific source code project. We have a lot of things that we've done to try to make that easy. So there's a Code Engine landing page on IBM Cloud. It has some great examples to guide you through those three starting points I talked about, start from source code, start from image and do batch. We have some really nice tutorials, like specific text analysis tutorials, for example, that'll show you how to build applications on Code Engine. And we actually have a pretty cool Git repo, which will take you through tons of samples of how to use Code Engine to solve all kinds of problems.

So there's a lot of really good code assets out there that a developer could go to and actually try something real on Code Engine, and the getting started experience is super easy. You've got IBM Cloud, you log in and you go to Code Engine, you create a project, you push an image, and in a couple of minutes you'll have something up and running that you can play with.
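
That flow, end to end, looks roughly like this with the CLI; the resource group, region, and sample image are placeholders to adapt to your own account.

```sh
# Rough end-to-end getting-started flow (names and region are placeholders).
ibmcloud login
ibmcloud target -g my-resource-group -r us-south

# Create a Code Engine project, then run a container image in it.
ibmcloud ce project create --name my-first-project
ibmcloud ce application create --name hello \
  --image icr.io/codeengine/helloworld   # a public sample image; substitute your own
```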

Jeremy: Amazing. All right. So I love watching the evolution of things and, again, just this different way that IBM is thinking about serverless and trying to make it easier. Because I always look back and I think of Lambda when it first came out, I was like, "Oh, it's so easy. You just put some code there and it's just done for you." And then we got more and more complex and more and more complex. And not that we didn't need to, I mean, some of this complexity is absolutely necessary, but I'm just curious, seeing the evolution and where things have gone, I talked to a bunch of people earlier, Rodric Rabbah, for example, who was one of the first people involved with the IBM OpenWhisk project, I guess it was Apache OpenWhisk or it became Apache OpenWhisk, whatever it was, seeing that evolution and seeing the changes that these different cloud providers have gone through, seeing the changes that IBM has gone through and where you are now with Cloud Code Engine.

I'd love to get your perspective here on where you think this is going, not just maybe what the future is for IBM, but what you think the future of serverless is, and just cloud computing maybe in general. I know that's a lot of questions.

Jason: I'll give you a long answer.

Jeremy: Perfect.

Jason: So that brings to mind two things. First, let me talk about the complexity thing for a second. Managing complexity is always hard. You are so right that many things start out with a value prop of like, this is easy. And then as people use it, you add more, and then three years later, we're like, "We need a new thing that's easy because that other thing is too hard now." And there's no magic pill for that. That's always a hard problem to manage. However, one of the things I like about the approach that we're trying to take with Code Engine is, because we've layered it on Kubernetes, it gives us a way to kind of decide where we want that complexity to show up. When we had a Cloud Functions OpenWhisk stack and we had a Cloud Foundry stack and you had a Kubernetes stack, you had to try to solve all problems within each stack.

So each stack was getting more complex because you were trying to, like, "Oh, I need storage. And I need private networking. And I need all these things." With Code Engine, I think we have an opportunity to say, once you cross some line, we're just going to ask you to drop down a layer and go use it directly in Kubernetes, right? You can push some of the complexity down, and that allows us to hold a harder line on complexity in the developer layer on top. So the balancing act we're trying to play is, because we built it on a common platform, we don't have to solve all problems in Code Engine directly.

Jeremy: Right.

Jason: So that's kind of my viewpoint on the complexity problem. On the evolution, it's really interesting. So one of the other things that my team's working on and launched recently is this thing called IBM Cloud Satellite, which is about distributing cloud outside of cloud data centers so you can kind of consume cloud services anywhere you want. So cloud computing in general, and this is not just an IBM thing, in the industry cloud computing is diversifying to be kind of omnipresent. You can consume cloud on-prem, at the edge, in our cloud data centers, wherever you want. There's a programming model dimension to that problem, too. Especially as you go to the edge, you kind of want some of these simple-to-consume, easy-to-deploy, scale-to-zero, resource-efficient models, because at the edge, especially, you don't have 2,000 cores' worth of compute to go deal with.

You have one box in a retail store, or you have two servers in the back of the distribution center. And so I think things like Code Engine layered on top of distributed cloud and in our case, things like Satellite, is actually a really powerful combination. I think we're going to see serverless become the dominant application development and deployment model, especially for these edge use cases, because it combines ease of deployment and management with efficiency and scale to zero footprint, which are all really attractive when you get outside of a mega data center like you have in cloud.

Jeremy: Right. Right. So I love this idea, too, about exposing the complexity only when the complexity needs to be exposed. I love this idea of creating sane defaults, right? If you could default Kubernetes to do all the optimal things that you would need it to do for use case X, if you could just do that for me, and then if I say, "Oh, I want to tweak this one thing," then be able to kind of go down to that level. But I love this idea you mentioned about edge, too, because that's one of those things that I think, from a programming model, as you said, how do you write code that's sort of, I guess, environment-aware? How does it know whether it's running at the edge versus running in a data center versus running maybe in a hybrid cloud and partially in your own private cloud or your own private data center? That model, just wrapping your head around it from a developer standpoint, I think is incredibly complex right there.

Jason: Yeah. It is. And sometimes it's like, how do they know? And then sometimes it's like, how do I just operate at a high enough level of abstraction that the differences between those environments can get handled below me? If I'm consuming Kubernetes clusters directly, the shape of that Kubernetes cluster in a retail store, or in a telco data center in Atlanta somewhere, or in the cloud is going to be different in each case, because you have a different amount of capacity and you have different networking. So you're going to have to deal with the differences. If I'm giving you a container image and saying, "Run this," the developer doesn't have to deal with those differences. The provider might have to deal with those differences, but the developer doesn't have to deal with those differences. So that's where I think things like serverless and approaches like Code Engine really come to be much more valuable, because you're just dealing at this higher level of abstraction and then Satellite and Code Engine and other services can kind of magically deal with the complexity for you.

Jeremy: Yeah. And so I know we talked a lot about Kubernetes and what's running underneath a lot of these services. Is that something you see, though, as being that sort of common format across all these different services, or do you think that something will evolve beyond Kubernetes to become a standard?

Jason: Right now, I really think that Kubernetes will become the base platform. What Kubernetes is will probably keep evolving. And I'm not saying it's Kubernetes forever, but I don't think we should underestimate the power of the kind of industry-wide alignment that exists around containerization and Kubernetes as the next infrastructure platform, if you will, because that's kind of really what it is. And I told you at the beginning, I used to build WebSphere app servers. So I was very involved in the whole Java app server era, the late 90s and early 2000s. And at that time, the industry kind of aligned around two platforms, Java and .NET, as the two dominant, at least enterprise, application platforms. We have everyone aligned on Kube. Literally, there's nobody in the industry who's not like, "Kubernetes is the platform." So I think it will be the abstraction for infrastructure in all these environments. The question will be, how do you consume it? Who manages it? How's it delivered? How does it optimize itself? And then at what level do you consume it?

And I don't think Code Engine is the end of it at all. I think there's lots of room for improving the consumption experience on top of Kubernetes for these developer use cases.

Jeremy: Yeah. Yeah. And that actually was going to be my next question: where do you see, what's the next evolution of Code Engine, right? Is that going to be kind of driving into specific use cases more and trying to solve those, or becoming more flexible? How do you see developers, I don't know, in five years, this is probably a hard question, but in five years, how are we going to be writing cloud applications?

Jason: Yeah. It's a great and super hard question, but I think projects like Ray are an interesting forward look into where this might go. One of the things that I've always felt, if I look at the whole history of PaaS in particular over the last five, six, seven years, is that PaaS has always been about simplifying the experience for the developers, but fundamentally, most PaaS environments don't change anything about how you write the code. They change how you package the code, how you deploy the code, how the code is executed, and how the dependencies of the code are satisfied. But the actual code you write probably wasn't any different. Right? And that's where I think the next step is: how do we actually get into the languages, into the code structure itself, to be able to take advantage of cloud capacity, to be able to take advantage of scale? And there's lots of projects that have taken attempts at that.

Ray, as an example, I think is a particularly interesting one, because there's some good examples where you can take a Python function, you literally add like one annotation to it in the language, and now it becomes remotely executable and horizontally scalable for you.

Jeremy: Right.

Jason: It's that kind of stuff that I think three or four years from now, there'll be a lot more of, where we're actually changing how code is written because that code can assume there's some containerized, scalable fabric out there somewhere that it can go execute on top of.

Jeremy: Right. Yeah. And I think about that pendulum swing for developers, especially, well, developers in the cloud, who used to be writing a bunch of code, whether it was JavaScript or Python or Java, whatever it was, and then all of a sudden now they have to switch context and be like, "All right, now I have to write a YAML file in order to configure my cloud resources," and that sort of back and forth. So yeah, that marrying of, basically, a programming language for the cloud is a really interesting concept.

Jason: And I think the distributed cloud notion, funnily enough, is a big enabler of that. Because, I don't know, the other tension I see right now is like, let's say you wanted to use Lambda or you want to use serverless functions. That only works in your cloud environment, but you're also running something at the edge or you're running something in your data center, so you're forced to kind of use different approaches, which tends to force you to kind of some common denominator models.

Jeremy: Right. Right.

Jason: And so you're kind of holding back from really adopting some of these newer models because of the diversity. Well, if cloud goes everywhere and those services go everywhere, then now I can just say, "Well, I'll use the serverless model everywhere. And so I can really deeply adopt it." So I think the distributed cloud thing will open up the opportunity to embed these approaches more deeply in kind of day-to-day development activities.

Jeremy: Yeah. No, I love that. I'm all for that approach because I think this split-brain sort of approach to it is getting very complex and it's not super easy. So is there anything else that you'd like to let the listeners know about IBM Cloud Code Engine?

Jason: No. I mean, I think we touched on a lot of the motivation behind it and the kind of core capabilities. I would just encourage you to go check it out, go check out the space, go give it a try and love to hear people's feedback as they do that.

Jeremy: Awesome. Well, first of all, I got to make sure I thank IBM Cloud for sponsoring this episode because just the team over there and everything that all of you are working on is amazing stuff and I appreciate the support. We appreciate the support in the community for what you're doing. So if people want to find out more about you or more about Cloud Code Engine, how do they do that?

Jason: Yeah. And you can find me on Twitter, JRMcGee, or LinkedIn. For me personally, I love to talk to people. For Code Engine, I think the best place to start is the product page, which is ibm.com/cloud/code-engine. And from there, you can get to all of the code examples I talked about.

Jeremy: Awesome. All right. Well, I will put all that stuff in the show notes. Thanks again, Jason.

Jason: Yeah. Great. Thanks, Jeremy.

