The Lambda runtime only lets you execute a single request at a time, so you won’t be able to take advantage of the asynchronous nature of modern web requests.

Somewhat true; a single Lambda container will process requests one at a time. Lambda will scale out additional containers to match demand, however, up to the concurrency limit you set (the account-level default quota is 1,000 concurrent executions, and that quota can be raised).
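For example, one way to cap how far a single function scales (the function name and number here are just placeholders) is to set reserved concurrency on it:

```
aws lambda put-function-concurrency \
  --function-name my-api-function \
  --reserved-concurrent-executions 100
```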
Any in-process caching will be wasted
Caching can still be worthwhile depending on your use case, though it's certainly not as effective as in a steady-state setup.
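Something along these lines (the names are made up): a static cache lives for the lifetime of the container, so warm invocations get hits and only cold starts pay for the miss.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class Function
{
    // Static state survives across invocations within the same warm container.
    private static readonly ConcurrentDictionary<string, string> Cache = new();

    public async Task<string> Handler(string key)
    {
        if (Cache.TryGetValue(key, out var value))
            return value; // warm container, cache hit

        value = await LoadExpensiveValueAsync(key); // stand-in for the real lookup
        Cache[key] = value;
        return value;
    }

    private static Task<string> LoadExpensiveValueAsync(string key) =>
        Task.FromResult($"value-for-{key}");
}
```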
Connecting Lambda to a relational database like SQL Server, PostgreSQL, or MySQL requires a Virtual Private Cloud (VPC) and a NAT Gateway, which has a flat base cost of around $40/month per instance. Without the NAT Gateway, the Lambda would not be able to call out to any AWS services (including CloudWatch for logging).
You should qualify this as a relational database running inside a VPC and not exposed to the internet (which, to be fair, is best practice). If you need your Lambda to communicate with infrastructure inside a VPC, it must be in the VPC, and once it's in the VPC it has no route to the internet by default, which is where the AWS service endpoints live. You can mitigate the VPC issue without a NAT Gateway by using an RDS Proxy, but there is additional cost.
Another huge downside not mentioned is that code talking to a database over stateful connections (basically, any RDBMS) can't do effective connection pooling from the application side: each Lambda container keeps its own small pool, and containers come and go, so under load you can exhaust the database's connection limit. You can also mitigate this with an RDS Proxy, which pools connections internally.
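As a rough sketch of that mitigation (the proxy endpoint below is made up), the only code-level change is pointing the connection string at the RDS Proxy instead of the database instance; the proxy then multiplexes the many short-lived Lambda connections onto a shared pool.

```csharp
using Npgsql;

// The connection string targets the RDS Proxy endpoint, not the PostgreSQL host itself.
// Credentials here are placeholders; in practice they'd come from Secrets Manager.
var connectionString =
    "Host=my-app.proxy-abc123xyz.us-east-1.rds.amazonaws.com;" +
    "Username=app_user;Password=...;Database=app_db";

await using var connection = new NpgsqlConnection(connectionString);
await connection.OpenAsync();

await using var command = new NpgsqlCommand("SELECT count(*) FROM orders", connection);
var orderCount = await command.ExecuteScalarAsync();
```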
The caching comment confuses me. You wouldn't ever cache things on your app server either; you'd use a caching layer instead that is shared between the servers in your cluster/farm. Auto-scaling an app server that has in-memory caching will leave you scratching your head over the bugs it creates. Been there and done that, unfortunately, lol.
Lambda isn’t any different. Don’t cache in your Lambda.
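If you do want caching with Lambda, the same shared-layer approach applies: something like ElastiCache for Redis, with the connection created once per container (the endpoint and key names below are made up).

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ProductCache
{
    // Created on cold start and reused for the lifetime of the container.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("my-cache.abc123.use1.cache.amazonaws.com:6379");

    public async Task<string> GetProductNameAsync(string id)
    {
        var db = Redis.GetDatabase();

        var cached = await db.StringGetAsync($"product:{id}");
        if (cached.HasValue)
            return cached; // shared hit, no matter which container serves the request

        var name = await LoadFromDatabaseAsync(id); // stand-in for the real query
        await db.StringSetAsync($"product:{id}", name, TimeSpan.FromMinutes(5));
        return name;
    }

    private static Task<string> LoadFromDatabaseAsync(string id) =>
        Task.FromResult($"product-{id}");
}
```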
Startup/initial setup and configuration can still be done no differently than on your server. The container will stay warm for a while (typically on the order of 15 minutes) even when there isn't any traffic, and while there is traffic the same warm container handles request after request, so you won't pay that startup/config cost on every request.
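In code terms (the handler shape and table name are just illustrative), that means doing the expensive setup in static fields or the constructor rather than inside the handler:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Lambda.Core;

public class Function
{
    // Runs once per cold start; reused for every request the warm container handles.
    private static readonly AmazonDynamoDBClient Dynamo = new AmazonDynamoDBClient();

    public async Task<string> Handler(string id, ILambdaContext context)
    {
        // Only per-request work happens here.
        var response = await Dynamo.GetItemAsync("Products",
            new Dictionary<string, AttributeValue> { ["Id"] = new AttributeValue { S = id } });

        return response.Item.TryGetValue("Name", out var name) ? name.S : "not found";
    }
}
```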
VPC performance isn't the best. If you are building net new and going with Lambda as your backend, then going with something else like DynamoDB avoids that issue. If you are integrating with existing services in your VPC, then you're most likely already paying for a NAT Gateway. I've worked around the VPC performance issue by marshaling data out of the VPC into DynamoDB via SNS: you can have a container polling your SQL environment and pushing changes to SNS, or just update the existing services in your VPC to publish to SNS when they insert/update the relational DB. Now you have a Lambda running outside of your VPC, subscribed to the topic, dumping data into DynamoDB for your API to query (there's a rough sketch of that subscriber below). This keeps ASP.NET outside the VPC and helps with the cold starts.

I've done this now on multiple apps at my enterprise and it's worked great. There can be a small delay before data from a VPC service is available via the API, but that's typically not a showstopper.
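For anyone curious, here's roughly what the subscriber half of that looks like (the topic payload, table, and names are all made up): the out-of-VPC Lambda takes the SNS change events and dumps them into DynamoDB for the API to query.

```csharp
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;

public class SnsToDynamoFunction
{
    // Created once per container and reused across invocations.
    private static readonly AmazonDynamoDBClient Dynamo = new AmazonDynamoDBClient();

    public async Task Handler(SNSEvent snsEvent, ILambdaContext context)
    {
        foreach (var record in snsEvent.Records)
        {
            // Assumes the VPC-side service publishes JSON like {"Id":"123","Name":"Widget"}.
            var change = JsonSerializer.Deserialize<ProductChange>(record.Sns.Message);
            if (change is null) continue;

            await Dynamo.PutItemAsync("ProductsReadModel", new Dictionary<string, AttributeValue>
            {
                ["Id"] = new AttributeValue { S = change.Id },
                ["Name"] = new AttributeValue { S = change.Name }
            });
        }
    }
}

public sealed record ProductChange(string Id, string Name);
```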
I appreciate the article and the proof of concept, but this seems like a really bad idea. I would never put a system like this in production.