Serverless or CNCF Kubernetes?
Reacting to "Go serverless or stay on Kubernetes for deploying microservices?"
Published: Monday, Jan 31, 2022 Last modified: Thursday, Nov 14, 2024
Update: Noticed this also caught the eye of Off-by-none: Issue #174 and Charles Chen, who focus rather pedantically on containers, which I personally care less about as a "static binary kind of guy".
My main feedback is that it appears to compare the two platforms in the context of heavy loads and microservices.
- heavy loads – not the typical use case
- microservices – serverless is, I feel (perhaps wrongly), cast as a "microservice" platform. It's not! I personally run entire monoliths on serverless platforms without problems.
In comparison to Kubernetes
Scaling
https://youtu.be/qjTiNTu1A9w?t=1499
In my experience the k8s autoscaler is complex, unreliable and doesn't scale without capacity planning. Serverless is much simpler and it scales without you thinking about it.
For example, with k8s you typically need some spare nodes to absorb scale-ups, so you are paying for cluster resources you do not use.
Lambda can burst to 3,000 concurrent executions and then grow by a further 500 concurrent executions per minute thereafter (limits can be increased). Reaching 10,000 concurrent executions therefore takes roughly (10,000 − 3,000) / 500 = 14 minutes. A k8s cluster cannot match that initial burst unless you over-provision.
Observability
https://youtu.be/qjTiNTu1A9w?t=1559
Kubernetes observability is complex and the lines are often drawn between App and Ops. With serverless, one developer can do both, thanks to the shedding of k8s complexity.
Serverless applications typically utilise the cloud provider's native solution, like AWS CloudWatch. No need for an "observability team".
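As a concrete (and minimal) sketch of why: anything a Lambda function writes to stdout or stderr lands in CloudWatch Logs automatically, so ordinary logging is all the instrumentation you need to get started. The event shape below is hypothetical.

package main

import (
    "context"
    "log"

    "github.com/aws/aws-lambda-go/lambda"
)

// handler logs straight to stdout; Lambda forwards this to CloudWatch
// Logs with no agents, sidecars or collectors to run.
func handler(ctx context.Context, event map[string]interface{}) (string, error) {
    log.Printf("received event with %d fields", len(event))
    return "ok", nil
}

func main() {
    lambda.Start(handler)
}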
Deployment strategies
https://youtu.be/qjTiNTu1A9w?t=1682
Deploying with serverless is radically simpler compared to the various HTTP ingress / meshing solutions for k8s. Furthermore, deploying and rolling back serverless functions typically takes <5s!
Measure a rollout of an application on a typical k8s cluster. It will not be less than five seconds!
Infrastructure costs
https://youtu.be/qjTiNTu1A9w?t=1828
1.5 million transactions with a three-second execution time is, I think, unrealistic: three seconds per transaction is unacceptable for most Web applications. In startups I've worked at, the upper threshold was more like 150ms.
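To see how much the assumed duration skews the comparison, here is a back-of-the-envelope sketch, assuming public us-east-1 Lambda pricing (roughly $0.20 per million requests and ~$0.0000166667 per GB-second) and a hypothetical 512 MB function:

package main

import "fmt"

func main() {
    const (
        requests    = 1_500_000
        perMReq     = 0.20         // USD per million requests (assumed)
        perGBSecond = 0.0000166667 // USD per GB-second (assumed)
        memoryGB    = 0.5          // hypothetical 512 MB function
    )
    // Compare the article's 3s duration with a realistic 150ms.
    for _, seconds := range []float64{3.0, 0.15} {
        gbSeconds := float64(requests) * seconds * memoryGB
        cost := float64(requests)/1e6*perMReq + gbSeconds*perGBSecond
        fmt.Printf("%.2fs per request: ~$%.2f/month\n", seconds, cost)
    }
}

At 3s per request the compute bill comes out around $38/month; at 150ms it is closer to $2. The duration assumption dominates the whole chart.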
Total cost of Ownership
Although "serverless" won the contest above, in my experience the cost of employing the "devops" teams required to run Kubernetes can be huge.
I would go so far as to say that K8s is the perfect toy for an engineer who doesn’t want to work on business features. I see a lot of cases where the K8s cluster becomes a Pandora’s box of busywork: one more version upgrade, one more CRD, one more CNCF landscape addon… etc
— Nathan Peck (@nathankpeck) December 3, 2021
The quoted k8s maintenance cost of ~$4k per month is unrealistically low.
I have observed Kubernetes setups with one team to manage the k8s cluster, an ops team to manage the running applications and a separate observability team. Never mind the application developers!
The above serverless maintenance (and infrastructure) cost of ~$3k is also unrealistic; in my experience it's closer to single-digit dollars.
It's not atypical to see three engineers able to develop, run and observe their application on a serverless platform. That is a dramatic saving in TCO!
The TCO chart, to my mind, would be better focused on accounting for people's time.
Reacting to the Key takeaways
Standardization. Vendor Lock-In.
Standardization/vendor lock-in: there is no Cloud Native Computing Foundation (CNCF)-backed serverless codebase like there is for Kubernetes. Each provider has its own implementation and features. You will need to adapt to these differences.
There are libraries to pave over vendor-specific interfaces and expose a standard HTTP interface, like serverless-express or Go's Lambda Gateway. The cost is a couple of lines of code and conforming to the standard HTTP interface:
// Serve through the Lambda gateway when running on AWS; otherwise fall
// back to a plain HTTP server (locally, or on another vendor's platform).
if awsDetected {
    err = gateway.ListenAndServe("", s.router)
} else {
    err = http.ListenAndServe(fmt.Sprintf(":%s", os.Getenv("PORT")), s.router)
}
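How awsDetected is set is up to you; a minimal sketch, assuming the AWS_LAMBDA_FUNCTION_NAME environment variable that the Lambda runtime sets:

// Lambda sets AWS_LAMBDA_FUNCTION_NAME in the runtime environment, so
// its presence is a reasonable signal that we are running on AWS.
awsDetected := os.Getenv("AWS_LAMBDA_FUNCTION_NAME") != ""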
Do please have a look at my article where I run the same code on all major cloud vendors with minimal code changes.
Execution time
Re "vendor lock-in" and the lack of CNCF-mandated standardization: yes, limits such as maximum execution time differ between vendors. But who cares if it's 9 or 30 minutes when you are trying to serve a request as quickly as possible? My personal target is to be below 30ms, not 30 minutes!
Cold starts
"Cold starts" are moot if your application is in use. Furthermore, cold starts are getting faster and faster over time… now <0.5s!
And as mentioned in the original article, there are good workarounds like AWS provisioned concurrency if your endpoint is infrequently used and you need to save half a second.
Limited support
Essentially no "language support" is needed if there is a clear interface for running a native binary. AWS's provided.al2 custom runtime (here targeting arm64/Graviton) showed this: it launched without "supporting" any particular language, yet runs any native binary.
Properties:
  Architectures:
    - arm64
  Handler: main
  Runtime: provided.al2
Serverless doesn't require a bloated packaging format like Docker images: essentially it can be a single ELF binary, so it plays well with good practices like shipping one static binary.
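A minimal sketch of such a binary, using the aws-lambda-go library and cross-compiled with CGO_ENABLED=0 GOOS=linux so the result is one static executable (for the provided.al2 runtime it is shipped as "bootstrap" in the zip; the handler body is purely illustrative):

package main

import (
    "context"

    "github.com/aws/aws-lambda-go/lambda"
)

// handler is the entire application surface; the compiled binary is
// uploaded as-is to the provided.al2 runtime, no image registry needed.
func handler(ctx context.Context) (string, error) {
    return "hello from a static binary", nil
}

func main() {
    lambda.Start(handler)
}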
Security
The out-of-the-box security benefits of serverless are understated. Why?
- Drastically simpler than Kubernetes
- Not long running (ephemeral)
- Managed underlying OS
If you're concerned about "multi-tenancy", you must have the same qualms about running EC2 workloads? It's the same underlying technology.
Conclusion
Despite my (overly) enthusiastic comments, I think we are all coming around to the stark benefits of "serverless". Yes, we do have to think differently: about fast-booting HTTP transactions, about how we structure service teams and about how we ship code.
For most workloads, including monoliths and heavy loads, serverless is a great start. Go build!