
Serverless Chats

Episode #46: Serverless Use Cases with Gareth McCumskey (Part 2)

About Gareth McCumskey:

Gareth McCumskey is a web developer with over 15 years of experience working in different environments and with many different technologies, including internal tools development, consumer-focused web applications, high-volume RESTful APIs, and integration platforms that communicate with many tens of differing APIs, from SOAP web services to email-as-an-API pseudo-web services. Gareth is currently a Solutions Architect at Serverless Inc, where he helps customers plan and build solutions using the Serverless Framework, and does developer advocacy for new developers discovering serverless application development.


Watch this episode on YouTube: https://youtu.be/5NXi-6SmZsU

Transcript:

Jeremy: One of the things that I know I've seen quite a bit of is people using just the power of Lambda compute to do things, right? And what's really cool about Lambda is that it has a single-concurrency model, meaning that when a Lambda function spins up, it only handles a request from one user at a time. When that request ends, it reuses warm containers and things like that. But if you have a thousand concurrent users, it spins up a thousand concurrent containers. And you can use that not just to process requests from, let's say, a frontend WebSocket or something like that. You can use it to run parallel processing or parallel compute.

Gareth: Yeah. This is the, what do they call it, the Lambda supercomputer.

Jeremy: Right.

Gareth: You can get an enormous amount of parallel... Try to say that three times quickly. Parallelization with Lambda. I mean, like I said, by default you get 1,000 Lambda functions that you can spin up simultaneously. And if you ask nicely... Well, you don't even have to ask nicely, just ask, and AWS will increase that to 10,000 simultaneous. And it's really impressive how much compute you can do, to the point where, at one point, I was working with a company looking to do some load testing of an application.

They had an instance where, on Black Friday, the tech kept falling over. They wanted to get some load testing in beforehand to make sure that it could handle at least a certain amount of volume. Because you can never entirely predict what your traffic patterns will look like, but at least let's try something. And they spent a lot of time looking at commercial solutions, because there are a few out there that try to help with that.

And those normally do about 500 to maybe 1,000 simultaneous or simulated users, which is impressive, but not quite good enough when you're an organization that's going to have 10,000 to 20,000 simultaneous users on your site at a time. That gets a bit rough. So the move was then to try and build a load testing application ourselves. And this was initially tricky to do, because we were trying to do it using traditional VMs, virtual machines, and containers in some way: get EC2 instances up and running to run multiple simultaneous users at a time in a single VM, using essentially a combination of end-to-end testing tools where you can simulate a user flow from loading the homepage to going to a product page, adding to cart, and going to checkout. We did all of this on a staging environment so that we could simulate the whole user flow all the way to purchase, the main line to purchase as it were, and make sure that we could get a few thousand users all the way there without issue.

And what ended up happening was these virtual machines just couldn't cope with the load of all these simultaneous users running on a single machine, even with inordinate amounts of CPU and RAM. So the idea came to us to try to do this with Lambda instead. Because you have a thousand simultaneous Lambda functions, and AWS also architects this in a way that the noisy neighbor effect between all of these Lambda functions is almost nothing. Well, you can't say nothing.

There has been some research I've read showing there is a bit of a noisy neighbor effect between Lambda functions. But one interesting thing we found was that this is reduced when you increase the size of your Lambda functions to the maximum memory size, which is pretty cool, because then each one essentially uses an entire machine, or virtual machine as it were. So now you're limiting that noisy neighbor effect from happening. Which means you can also run 10 to 20 simultaneous users on that single Lambda function with that enormous amount of memory.

And if you have a thousand of those, well, now you've got a thousand Lambda functions with 10 to 20 users per Lambda function, running an end-to-end test, pointed at a single staging environment. That's a pretty powerful bit of load testing you can perform there. And Lambda being as flexible as it is, we needed to import a binary to execute the end-to-end testing framework that we were using.

So you can use Lambda asynchronously to help you spin up the required binaries, import all of these items in, and then synchronize the start of the tests through SNS, for example, which can just fan out the go command to all of these Lambda functions waiting to execute. And that was it. You have 15,000 to 20,000 users load testing an application. And that's going to tell you whether you're ready for Black Friday or not.
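For illustration, here is a minimal sketch of the SNS fan-out Gareth describes. The topic ARN, the USERS_PER_WORKER setting, and run_user_flow() are hypothetical placeholders, not anything from the actual project:

```python
# Minimal sketch: fan out a "go" signal to waiting load-test workers via SNS.
# TOPIC_ARN, USERS_PER_WORKER, and run_user_flow() are hypothetical placeholders.
import json
import os
from concurrent.futures import ThreadPoolExecutor

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["LOAD_TEST_TOPIC_ARN"]  # assumed environment variable

def start_test():
    # The coordinator publishes one message; SNS fans it out to every
    # subscribed worker Lambda at once, synchronizing the start of the test.
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"command": "go"}))

def run_user_flow(user_id):
    # Placeholder for the end-to-end test: load homepage, product page,
    # add to cart, check out against the staging environment.
    ...

def worker_handler(event, context):
    # Each worker runs 10 to 20 simulated users concurrently; with 1,000
    # concurrent workers that's 10,000 to 20,000 simultaneous users.
    users = int(os.environ.get("USERS_PER_WORKER", "15"))
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(run_user_flow, range(users)))
```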

Jeremy: Right. Yeah, I think it's an awesome use case. And I mean the parallel load testing, just the amount of scale you can get. Even if you try to run something on your local machine to simulate some things, you only have so many threads that you can open up, so many users you can simulate. And to do this reliably, I mean, you can go and use some of these commercial testing sites to do it.

But they get pretty expensive if you want to run regular tests. If you run a thousand concurrent Lambda functions, even at the maximum memory, and it takes maybe five minutes to run your full load test, you're talking about a couple of dollars every time that runs. Right? So the expense there is amazingly low. I think that's a super useful use case. There are some more specialty things, though, that you can do with Lambda.

And Lambda is very, very good at, or I should say Lambda is built to receive, triggers, right? Serverless is an event-driven system. So there are all kinds of triggers for Lambda functions. And I'm sure we probably could talk about a thousand different ways that you could trigger a Lambda function and do something, everything from connecting to an SQS queue to, like you said, DynamoDB streams and things like that.

But one of the interesting triggers for Lambda functions is actually getting an email that is received by SES, which is the Simple Email Service that AWS has. And I find this to be really interesting. You did something interesting with that.

Gareth: Yeah. We worked with an organization that essentially handled requests for medical insurance. Other companies would send this organization emails with information about users who needed medical insurance. And these emails were usually pretty similarly formatted. They were structured almost exactly the same, just with user information that was slightly different every time. And it was getting very tedious for them to have to constantly go into this inbox, trawl through all these emails, and then manually insert them into a CRM system so that the sales team could later get back to these folks and help them sort out their medical insurance.

So one of the things they initially did, before we came along, was have a virtual machine running with a regular old email inbox and a script that ran every five minutes on cron. It would log into this inbox, pull all these emails out, try to parse them, and then insert them into their CRM via its API. Anybody who's done that kind of thing will realize there are quite a few flaws in that process, because not only do you potentially have thousands of emails to pull in in five minutes before the next cron run, but how do you keep track of which emails you've already read?

And the issue there as well is that this inbox was used by humans too. So you couldn't rely on the read flag on the inbox, because a human might've clicked on an email, and then you'd completely miss that lead. So it was kind of a problem to solve. Ultimately, the solution ended up being registering a new subdomain on their email domain and then informing the partners that the email address to send these leads to had changed. And this email address was actually created inside of SES, Simple Email Service, which gives you a way to receive emails.

You can create inboxes in SES to receive mail. And then you can set up a process for how to manage these emails. Of the various methods you can use, the one we ended up choosing was taking the email and storing it as an object in an S3 bucket. And this is where these triggers then happen. Anybody who's looked at serverless has seen the Hello World equivalent of serverless, where you use S3 buckets to create thumbnails of images. But you can trigger off anything in S3.

So if you drop an email into an S3 bucket, that can trigger a Lambda function. What's useful here is that we have a system that receives an email and puts it into an S3 bucket, and that specific object put spins up a Lambda function with all the detail of what this item is. The Lambda function that's triggered can read that email straight out of the S3 bucket and then process it just like before. It can parse through the email, get the user's contact information, and then put that into the CRM they need in order to get in touch with folks.

And again, there's no worry here about whether we've read this email before. This isn't a human-readable inbox. This is only used through SES. So there's none of that concern. And again, this is all entirely serverless. SES is going to receive your email at pretty much whatever quantity you need, and they were receiving a few thousand emails a minute. So it became quite a big deal. And S3 as well has enormous scale that you can just use. You can just insert all of these items, and Lambda can scale out and process them all individually, pretty handily.
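A minimal sketch of the S3-triggered email processor described here. The field patterns and push_to_crm() are hypothetical placeholders; the S3 event shape and the standard library email parsing are real:

```python
# Sketch: SES writes the raw email to S3; the put event triggers this handler.
import email
import re

import boto3

s3 = boto3.client("s3")

def push_to_crm(lead):
    ...  # placeholder: POST the lead to the CRM's API

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        msg = email.message_from_bytes(raw)
        if msg.is_multipart():
            part = next(p for p in msg.walk()
                        if p.get_content_type() == "text/plain")
        else:
            part = msg
        body = part.get_payload(decode=True).decode("utf-8", errors="replace")
        # The partner emails are near-identically formatted, so simple
        # patterns (hypothetical here) can pull out the lead's details.
        lead = {
            "name": re.search(r"Name:\s*(.+)", body).group(1),
            "phone": re.search(r"Phone:\s*(.+)", body).group(1),
        }
        push_to_crm(lead)
```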

What actually ended up being the problem was that their downstream CRM couldn't handle the load at one point, so they had to have an upgrade. But that's a different story.

Jeremy: Well, that's a common problem with serverless: it handles so much scale that the downstream systems have a problem. So that use case, though, this idea of receiving emails, dumping them in S3, reading them in with Lambda, there are just so many possible use cases around that. The medical one I think is interesting, parsing an email and trying to get it into a CRM.

But if you wanted to build your own ticketing system, like a support ticketing system... Now again, I wouldn't suggest you do that unless you're building a SaaS company that's going to do it. But if you're building a SaaS company that has a ticketing system component, this use case is perfect for it. I mean, it's great. And then I actually saw, quite a while ago, somebody built an S3 email system, like an entire email server, using just S3, Lambda, and SES.

So essentially when the message came in, it got processed by a Lambda function. It was just sort of a catch-all address; the Lambda function would read who the "to" address was and put it in the right mailbox. It's amazing. So I think that's a really, really cool use case. You could handle attachments and all kinds of things like that, run algorithms on them, send them into SageMaker and do machine learning. I mean, there are all kinds of things that you could do that would be really, really cool around that.

Gareth: There's also the idea... I mean, ideas have crossed my mind when you think about all these triggers that you could potentially use, and all the services available in AWS to use them with. I even picture an idea where you take that ticketing system and combine it with a CRM-type system where you're receiving constant communication with customers. And you can send this through, I forget the name of the AWS service that does sentiment analysis on text.

Jeremy: Yes. Yup.

Gareth: So you can just use that to do sentiment analysis. And you can have managers in a customer services team get notified when a certain proportion of customers suddenly show negative sentiment. And you can start investigating before you even know there's a problem, because you've picked up from customers that there is a problem to go and solve. You could even do this with voice, with AWS's call center service. You can pass that through a sentiment analysis engine.

And again, all of this stuff can be built serverlessly. You can trigger all of these things just as they happen, automatically, event-based, really.
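The service Gareth is reaching for is Amazon Comprehend, which Jeremy names later in the episode. A minimal sketch of the alerting idea, where the message source, threshold, and notify_manager() are hypothetical:

```python
# Sketch: run inbound customer messages through sentiment analysis and
# alert a manager on strongly negative sentiment.
import boto3

comprehend = boto3.client("comprehend")

def notify_manager(text, scores):
    ...  # placeholder: e.g. publish to an SNS topic the managers subscribe to

def handler(event, context):
    for record in event.get("Records", []):
        text = record["body"]  # assumed: messages arrive via an SQS queue
        result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
        # Hypothetical threshold: flag messages Comprehend scores as
        # negative with high confidence.
        if (result["Sentiment"] == "NEGATIVE"
                and result["SentimentScore"]["Negative"] > 0.8):
            notify_manager(text, result["SentimentScore"])
```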

Jeremy: I hope this has got people thinking, because we clearly are not going to have enough time to cover all of these use cases. But there are a few more that I'd like to talk about, because I think these are broad enough that you can think of your specific use case for them. And one of those is cron jobs, right? Cron jobs are... They're the Swiss Army knife for developers. We use them for everything.

We use them when we're like, "Hey, these log files keep filling up on this server. Let's run a cron job and clean them up every couple of days," or whatever it is. We use them to trigger ETL tasks. We use them to trigger all kinds of different things. And that is a really, really good use case for serverless, especially if you want to run something peripheral to your main application.

Gareth: Yeah. And cron jobs, funny enough, are probably the second most common use case I think we've seen with serverless applications, because every developer has been in that situation where you've got your main stack sitting there doing the web stuff you need, and you suddenly realize you don't really have any way to run scheduled tasks. So you spin up a little t2.small EC2 instance somewhere to run some basic cron jobs that might just call an API endpoint at some point.

But you need that capability to schedule things on a per-minute, hourly, daily, whatever basis. And that's where services like Lambda become incredibly useful, because you can just attach a cron expression or a schedule to a Lambda function, and then have it access all of the AWS services that you'd normally access. And a lot of the time these cron jobs feed everything from sending regular email, because often you have management that wants a status update sent for certain metrics, so you build a cron job for that. A lot of the time, before you realize the wonder of SQS as a queuing system, you might build your own little queue in a database table and use a cron job every few minutes to run over it. That kind of thing happens. So again, Lambda functions become really useful for managing all of these scheduled cron jobs that you need to execute.
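A minimal sketch of the "status update email" cron job Gareth mentions. The schedule itself would be attached outside the code (for example an EventBridge rate(1 day) rule, or a schedule event in the Serverless Framework); the addresses and metric source here are hypothetical:

```python
# Sketch: a scheduled Lambda that emails management a daily status update.
import boto3

ses = boto3.client("ses")

def gather_metrics():
    # Placeholder: pull yesterday's numbers from your data store.
    return "Orders: 1,234\nErrors: 2"

def handler(event, context):
    # EventBridge invokes this on the cron schedule; nothing sits idle
    # between runs.
    ses.send_email(
        Source="reports@example.com",
        Destination={"ToAddresses": ["management@example.com"]},
        Message={
            "Subject": {"Data": "Daily status update"},
            "Body": {"Text": {"Data": gather_metrics()}},
        },
    )
```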

Jeremy: And combining cron jobs with other things. Let's say every hour or so you want to trigger something so that anybody who's on your website gets something pushed to them, or whatever. If you've got WebSockets set up right, you can just run a cron job and do that every hour. The ETL tasks I think are an excellent use case for cron jobs. And then you actually did something with some XML feeds, right?

Gareth: Google Shopping feeds are one of these things Google provides for you to advertise your products. Again, this was part of the e-commerce platform that I was working with. Google Shopping has the ability to read an XML feed of your products, but this feed needs to be built. And one easy way to do that, because the details of your products don't change all that often, I mean a shirt is a shirt and a pair of shoes is a pair of shoes, is to build this feed ahead of time.

So a cron job is a great way of pre-rendering this XML feed so that when the Google Shopping spider comes along to read it, it's always available. And in this particular case, the organization was using Magento as their e-commerce backend. So instead of building the feature on top of the existing stack, we were able to build a serverless side project to handle it, so we didn't have to make changes to the existing stack and potentially cause issues there.

And Google could just come along at any time and read this shopping XML feed, because the XML data was built ahead of time with a cron job.
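A minimal sketch of the scheduled feed builder, assuming a simplified feed layout (the real Google Shopping schema has more required fields); fetch_products() and the bucket name are hypothetical:

```python
# Sketch: render the product feed ahead of time and drop it in S3 for
# Google's spider to fetch.
import xml.etree.ElementTree as ET

import boto3

s3 = boto3.client("s3")

def fetch_products():
    # Placeholder: read products from the commerce backend.
    return [{"id": "42", "title": "Shirt", "price": "19.99 USD"}]

def handler(event, context):
    channel = ET.Element("channel")
    for p in fetch_products():
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "id").text = p["id"]
        ET.SubElement(item, "title").text = p["title"]
        ET.SubElement(item, "price").text = p["price"]
    s3.put_object(
        Bucket="shopping-feed-bucket",  # assumed bucket name
        Key="feed.xml",
        Body=ET.tostring(channel, encoding="utf-8"),
        ContentType="application/xml",
    )
```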

Jeremy: Yeah, and I love that too, where you use Lambda to do the compute and it doesn't touch the rest of your stack. If you're generating reports or something like that every night, do you really want to be using CPU power that runs alongside your application, that's handling requests from your users, to generate what could be a very CPU-intensive report? And that actually leads me to this next use case, which is this idea of offline or async processing. And you had mentioned asynchronous in the past.

Like that's the idea of API Gateway sending a request to SQS. SQS, which is the Simple Queue Service, grabs the message and replies back to the API, "Hey, I got it," which replies back to the user and says, "Okay, we've captured it." But then you've got something else down the road that might require more processing power than you would want to spend synchronously, right? Like, you don't want to generate a big PDF or convert a bunch of thumbnail images.

You don't want to do that in real time while the user is waiting on the other end of an API. You want that to happen in the background, as a background task. So this idea of offline or async processing, what are some of the other sorts of things you can do with that?

Gareth: Well, you've mentioned a few of the use cases already. One that I ended up working on was a project where users were able to upload images into the application. And one of the things that had to be done to these images was essentially a uniqueness hash calculation. This essentially scans through the pixels of the image and then calculates a sort of string-based hash, with which you can very quickly determine whether you have another image in your library above a certain similarity level.

You can also tweak how similar you want these images to be. But this is a pretty intensive process and can take 10 to 20 seconds in some cases, depending on the size of the image. And you don't want this kind of thing happening synchronously on upload. You don't want a user to upload an image and sit around waiting 10 to 20 seconds until this hash is calculated. So what's nice here is that you have, for example, an S3 bucket. We keep talking about S3, but it's the workhorse of AWS. It does so many things so well.

But again, you can trigger asynchronous, offline-style processing by dropping this image, for example, into that S3 bucket, and then, either through a cron job as you mentioned before, or just by triggering a Lambda function off of that put object action the S3 bucket generates, you can run these calculations. And this can run the gamut. It's not just the hash calculation I'm talking about. You mentioned PDF generation. You can have dynamically generated PDFs made available to the public when something like a DynamoDB table is edited.

Now in the background, a Lambda function receives that event through a DynamoDB stream saying the data has changed, and it starts rebuilding PDFs. Maybe there are multiple of them. And you can combine this with the power of something like SNS or EventBridge so that you trigger multiple Lambda functions, each rebuilding a specific PDF, because of one data change that you made. Very powerful ways of doing these things.
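A minimal sketch of that fan-out: a DynamoDB stream handler that publishes "data changed" events to SNS, where several PDF-rebuilding Lambdas could be subscribed. The topic ARN is a hypothetical placeholder; the stream record shape is the standard one:

```python
# Sketch: DynamoDB stream -> SNS, fanning out to multiple PDF builders.
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["REBUILD_TOPIC_ARN"]  # assumed environment variable

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            changed_key = record["dynamodb"]["Keys"]
            # One publish fans out to every subscribed PDF-building Lambda.
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(changed_key))
```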

One of the other useful ones I've used in the past, and we were talking about the whole Jamstack-style process before, is that if you look at a lot of web frontends, there are many pages that never change in their content, or change very rarely. And in those rare circumstances, somebody comes along to edit the content in a CMS. So instead of having a WordPress-style CMS where you click a save button and the content is instantly changed...

You can instead save that in some kind of headless CMS system, for example, that can then trigger a Lambda function to pull in this new content, rebuild the static HTML, JavaScript, and CSS the page consists of, and push that into an S3 bucket. And this is a type of CMS system that we've built in the past with asynchronous processing, because you don't really need that page to be updated the instant somebody hits the save button.

But you do need it to be updated within a reasonable amount of time... maybe a few seconds is more than enough. And then you have the entire power of a Jamstack that can manage an enormous amount of load, serving static content, but still have that asynchronous process to make pages dynamic as well. Pretty useful.

Jeremy: Yeah. Love that. Love that. So the other one that I really like, you mentioned that image hash to figure out the differences between images. I actually did one of those several years ago. I know it really has nothing to do with serverless, but it is a really, really cool little algorithm that you can write: essentially you reduce the image to 10 x 10 pixels or something like that, convert it to grayscale, take the average of the pixels, and then compare each pixel against that average. It's a very, very cool algorithm that you can write. So definitely check that out if you're interested in running image hashes.
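For the curious, a minimal sketch of the average-hash idea Jeremy describes, using Pillow. The 10 x 10 size follows his description; real libraries like imagehash use similar but more refined variants:

```python
# Sketch: shrink, grayscale, average, then one bit per pixel above/below
# the average. Similar images produce similar bit strings.
from PIL import Image

def average_hash(path, size=10):
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each pixel contributes one bit: 1 if brighter than average, else 0.
    return "".join("1" if p > avg else "0" for p in pixels)

def similarity(hash_a, hash_b):
    # Fraction of matching bits: 1.0 means identical hashes.
    same = sum(a == b for a, b in zip(hash_a, hash_b))
    return same / len(hash_a)
```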

Gareth: The other interesting thing is if you combine these kinds of scenarios with something like CloudFront and Lambda@Edge. This isn't necessarily asynchronous processing, but AWS actually has an example architecture where they combine Lambda and CloudFront to do thumbnail generation on demand, which is a very interesting pattern to take a look at. You have a base image. It might be your monstrous 4K image that is 20 megabytes in size. You don't want to serve that on a frontend, but you want it to be the source for all the other images.

And you can use CloudFront and Lambda@Edge to receive a request for this image. With Lambda@Edge you can intercept that request. Often this is done with a unique URL, so you can have a URL that says something like thumbnail/ with a specific size written somewhere in the path. You extract that information out of the path and work out that the image this URL references is the enormous one sitting in your S3 bucket.

Then you pull that out of the S3 bucket, resize it to the size you want so that it's much smaller, and return it to the user, and immediately CloudFront is going to take this much smaller image and cache it. The first request might take a second or two for that whole process to happen. But the instant you've done it once, it's cached in CloudFront, and the next request that comes in for that size dimension is already done, and you haven't consumed any extra space in S3.

It's all just an item sitting in CloudFront. So you could even clear your CloudFront cache and reset all those images. But again, you haven't incurred the cost of additional items sitting in your S3 bucket whose lifecycle you'd need to worry about managing. This is all just handled in CloudFront for you.
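A minimal sketch of that pattern as a Lambda@Edge origin-request handler. The path convention (/thumbnail/200x200/cat.jpg), bucket name, and JPEG-only handling are assumptions, and Pillow would need to be bundled with the function:

```python
# Sketch: intercept thumbnail URLs at the edge, resize from the S3 source,
# and let CloudFront cache the generated response.
import base64
import io
import re

import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "original-images-bucket"  # assumed source bucket

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    match = re.match(r"^/thumbnail/(\d+)x(\d+)/(.+)$", request["uri"])
    if not match:
        return request  # not a thumbnail URL; pass through to the origin
    width, height, key = int(match.group(1)), int(match.group(2)), match.group(3)
    original = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    img = Image.open(io.BytesIO(original))
    img.thumbnail((width, height))  # resize in place, preserving aspect ratio
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    # Return a generated response; CloudFront caches it for the next request.
    return {
        "status": "200",
        "statusDescription": "OK",
        "headers": {"content-type": [{"key": "Content-Type",
                                      "value": "image/jpeg"}]},
        "body": base64.b64encode(buf.getvalue()).decode("ascii"),
        "bodyEncoding": "base64",
    }
```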

Jeremy: Yeah, that's great. We didn't really talk at all about edge use cases. But obviously there's the ability, and I shouldn't say obviously, because this might not be obvious to people, but you can run a Lambda function at the edge. So you can do A/B testing, you can do redirects, you can do blocking, you can do blacklisting... There's a million use cases around just the edge itself.

But anyway, we're running out of time here, and I do want to get to one last one, which is, I guess, the proverbial thorn in serverless's side, if that's the right way to say it. And that is machine learning, because anytime you say, "Well, serverless can do pretty much everything," everybody's like, "No, no, you can't do machine learning." Which is true to some extent. So there are some use cases where machine learning does work, and there are some where it doesn't. But I don't know, maybe you have some more insight into this.

Gareth: Well, there are a few angles you can take on that, because it all depends. I think one of the biggest Achilles' heels of Lambda when it comes to machine learning is really the disk space that's available to you. Because a lot of the time with Lambda you need to import additional libraries in order to run machine learning models. You also need to import your models, which can often be enormous numbers of megabytes in size.

And that means you've got some limited space to work with there. But if you can actually fit those libraries and those models into the roughly 250 megs of limited space a Lambda function gives you, well, then you can run that in parallel, like the Lambda supercomputing I mentioned, and potentially get a lot of work out of Lambda functions. But serverless as an architectural concept isn't necessarily just about Lambda functions either.

I mean, the whole point of serverless is to look at the managed services available to you so you don't have to rebuild everything from scratch; remove that undifferentiated heavy lifting. So again, there are a couple of angles on this, because if you want to build an image recognition model yourself, well, maybe reconsider that. There are image recognition models out there that you can use.

If you're doing text-to-speech, well, there are text-to-speech engines already available in AWS, and they might be good enough to do what you need. Of course, if you're trying to build your own product that is a text-to-speech product, well, okay, I get it, then you might want to build it yourself. But if you find that the model you need isn't quite provided by these services, there's one additional service you can use in a serverless context.

And that's Fargate, which is a pretty cool service to look at. It's different from Lambda in that it isn't quite as responsive. So if you're looking for something low latency that can really get things done quickly, Fargate might not be the tool for you. But if you're running ML models, that's probably not your concern. And for anybody who isn't aware, Fargate is a service that lets you run Docker containers without worrying about any of the underlying orchestration and management of them.

You essentially say: I need a Docker container, this is the image, these are the parameters of what I need to execute, and Fargate will spin up that infrastructure for you in the background. I don't know how AWS does it. But again, that's the beauty of it. I don't need to. They manage all of that for you, and you allocate the disk space as well. When you're building these images, you set up the disk space you need, you import the libraries you need, the models you need, and it'll just execute on AWS's backend. So that's a great way to run your own models. And another angle is SageMaker. So there are many ways to take this, where AWS provides a service that lets you run your own models. SageMaker is a way...
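A minimal sketch of kicking off such a job from a Lambda function. The cluster, task definition, subnet, container name, and command are all hypothetical placeholders; the run_task call itself is the standard ECS API:

```python
# Sketch: launch a long-running ML job on Fargate, triggered from Lambda.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    ecs.run_task(
        cluster="ml-cluster",                # assumed cluster name
        launchType="FARGATE",
        taskDefinition="ml-inference-task",  # image bundles libraries + model
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # assumed subnet
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "model",  # assumed container name
                "command": ["python", "run_model.py",
                            event.get("input_key", "")],
            }]
        },
    )
```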

Jeremy: They have the models already built for you in most cases too.

Gareth: Yeah. So you can just import your models and run them. And there's an entire set of infrastructure back there to let you run your machine learning models anywhere you like.

Jeremy: Yeah, I totally agree too. I mean, there are just so many options for doing that. And like you said, there are a ton of these media services that they have. They have Lex, they have, you know, Rekognition with a "k" that lets you do image recognition and some of those things. The sentiment analysis, I think it's Comprehend, right? We talked about that a little bit earlier. That's machine learning.

That stuff is just there for you, just an API call away from your Lambda function or whatever you're doing. And so unless you have some really unique machine learning model that you need to build, there are still options to do it in a fairly serverless way, or close to a serverless way, just maybe not on Lambda functions. But anyway, listen, this has been a great conversation.

I think hopefully people have learned a lot from it, but before I let you go, I do want to ask you: you work with customers at Serverless Inc. You help them figure out how they want to move to serverless and what they're building. So what are you seeing as those first steps that companies are taking as they start to migrate or think about serverless?

Gareth: Yeah, it's interesting. One of the downsides of serverless, I think, is that it's such a new way to build things that it initially seems a little daunting. A lot of organizations, especially the older ones, have come from the idea of having your own servers on premises, and now this new cloud thing has happened, so we need to move to the cloud. And they can essentially take what they have on premises and just lift and shift it into the cloud. And things are good. Things are familiar. There are some slight tweaks here and there, but pretty much it's what we know.

It's just running in somebody else's data center, and the lift and shift seems to work. Serverless mostly isn't a lift-and-shift type operation, but there are some limited ways to do some lifting and shifting. An example of this is, if you're already running an Express backend, you can pretty much take your existing Express backend and fit it into Lambda. And we have a lot of customers who do exactly that. For anybody who has used all of the available services in serverless and built their Lambda functions from scratch, this might seem like an odd way to do things, but it's actually a really nice way to get into serverless quickly.

And it lets you quickly see some of the benefits you get, like the automatic load balancing, the disaster recovery, the reduced maintenance, and so on. And we see this across the board. There's even a project out there now called Bref, for example, if you're building PHP applications, where you can just run your Laravel or Symfony application on Lambda functions.
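The same lift-and-shift idea, sketched in Python terms rather than the Express or PHP examples above: an existing Flask app wrapped so API Gateway events become ordinary WSGI requests, here using the serverless-wsgi package (one of several adapters that do this):

```python
# Sketch: run an unchanged Flask app inside a Lambda function.
import serverless_wsgi
from flask import Flask

app = Flask(__name__)  # your existing application, as-is

@app.route("/")
def home():
    return "Hello from Lambda"

def handler(event, context):
    # Translates the API Gateway event into a WSGI request and back.
    return serverless_wsgi.handle_request(app, event, context)
```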

So we see that a lot as the initial use case, where folks want to take what they have and lift and shift it into serverless. And then, ideally, what we find is that they come to understand there are limitations to this, because you're just taking what you already know and putting it into something brand spanking new, and then you start realizing there are a lot of benefits to serverless that you're not really getting by doing that.

Things like making full use of all of these services available to you in AWS, because you can't necessarily get your Express backend triggered by an S3 bucket, for example. A Lambda function does that, and it doesn't really map to Express all that cleanly. That's when we start helping organizations with those POCs, where they build a sort of cloud-native, serverless-first style application, or just one small element of their application as a serverless-first item. And this runs the gamut.

It's things like: we have a conference coming up that's going to have thousands of attendees, and we want to build a mobile application that's only going to exist for that weekend, so let's build it serverless and see how that works out. Or: we have a review system on our site where customers submit reviews to this third party, so let's rebuild our integration into our frontend using serverless, for example. There are so many different use cases in these POCs that folks are trying out.

And ultimately we end up in a situation where they realize that serverless is incredibly powerful. Their Express or their Django or their Flask app is running really well, and their POC for the serverless-first application is running incredibly powerfully and reliably. Now they start looking at re-architecting their entire stack, a lot of the time using serverless as the primary way to do it.

And again, it's very difficult to pin down a single use case there, because this runs the complete gamut of everything we've spoken about tonight, whether they're integrating WebSockets with parallel compute and Jamstack-style applications and so on. And it's a really exciting field to be in, with all this growth in serverless that we're seeing and all these different use cases from organizations out there.

Jeremy: Yeah, totally agree. That's just awesome. And that's what I love about how serverless works: there are a lot of really easy on-ramps, right? The DevOps piece of it, running some of those cron jobs, doing something that's peripheral to your application, or building out a separate microservice that does the reviews, or does some sort of integration, or does your PDF generation, that kind of stuff. But it's not touching the main system.

But then you start to build more complete tools, taking advantage of that Lambda lift, or that fat Lambda that does a lot of processing for now, and then you start breaking it up, use the strangler pattern, start sending things to different services. Yeah, that's just awesome. Thank you so much for doing this episode, because this is one of those questions where people are like, "Well, what can you do with serverless?" And really, it's what can't you do with serverless? Right now we're getting to a point where there are very few limitations. Yeah, I just think it's amazing.

Gareth: Yeah. I've had folks ask me, so what can't you do with serverless? And that actually is one of the most difficult questions for me to answer. In the past, it used to be that you couldn't run really long compute, and then AWS increased the timeouts to 15 minutes, and that unlocked a lot of use cases you couldn't do before. And then they introduced Fargate, so if you really have something that you need to run in the background for a very long period of time, now you can do that with serverless.

The whole industry and the whole architecture is advancing so rapidly, and so many new services are coming out. AWS keeps listening... The other vendors too. We've been talking a lot about AWS, but the field is growing enormously with other vendors too, like Azure, for example, and even Google making a lot of inroads with the serverless infrastructure they're building out.

Jeremy: Tencent is doing a lot.

Gareth: Tencent is actually busy deploying a lot of cloud services right now. And in fact, with the Serverless Framework, we have support for Tencent, because they approached us and said, "Listen, we want to make sure we can do serverless stuff, because this is the way of the future." And that's what they're focused on now.

Jeremy: Yeah. And I mean, I guess my last point would be, for people who are moving into serverless: trust the services in the cloud, right? The cloud can do things better than you. So even just moving your Express app over into a single Lambda function, everything like retries and failure modes and some of those other things, there's so much stuff built into the cloud.

So it's not you having to do all of it yourself. There's just a lot of support there. So anyway, Gareth, thank you so much for being here. If listeners want to get a hold of you and read some of the great blog posts you have, how do they do that?

Gareth: Well, most of my blog posts are written on the serverless.com website, and that's easy to find: serverless.com/blog. We update it pretty regularly about the Serverless Framework, new features we're bringing out, and all the work we're doing at Serverless. For me personally, if anybody wants to get in touch, they can get a hold of me on Twitter; it's @garethmcc. Nice and easy to find. And that's really the best way to get in touch with me and see what I've been writing.

Jeremy: Awesome. All right. We will get all of that into the show notes. Thanks again.

Gareth: Awesome. Thanks so much, Jeremy.

THIS EPISODE IS SPONSORED BY: Datadog
