Brian Grant
GKE co-TL, Kubernetes Steering Committee member, SIG Architecture co-Chair, CNCF TOC member
453 Tweets · 195 Following · 3,508 Followers
Tweets
Brian Grant 1h
Replying to @e_k_anderson @jbeda
Those kinds of transactions also make it very difficult to add layers and extensions with properties similar to those of built-in resource types and operations. In Borg, I found that an unbounded variety of such layers and extensions was necessary. K8s does better, but a few gaps remain.
Brian Grant 2h
Replying to @mchmarny
Thanks. Work on the CNCF Cloud Native definition reemphasized the importance of familiar examples to me. (Ooh, I just learned a new term: ostensive definition.) I'll try to incorporate more in the future.
Brian Grant 2h
Replying to @e_k_anderson @jbeda
That may be one reason. Others include authz of fine-grained actions (one can argue whether that's useful), imperative maintenance operations like reboot and snapshot, and the lack of a mechanism to update associative lists, but mostly it's just a difference in mindset.
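For readers unfamiliar with the term: an associative list is a list whose elements are identified by a key field, so an update should merge by key rather than replace the whole list. A rough Go sketch of that kind of merge (types and names invented for illustration, not any particular API's patch format):

```go
package main

import "fmt"

// Port is an element of an associative list keyed by Name: an update
// addressed to "http" should modify only that entry, not the whole list.
type Port struct {
	Name string
	Num  int
}

// mergeByKey applies updates to an existing associative list, matching
// entries by the Name key and appending entries that aren't present yet.
func mergeByKey(existing, updates []Port) []Port {
	out := append([]Port(nil), existing...)
	for _, u := range updates {
		merged := false
		for i := range out {
			if out[i].Name == u.Name {
				out[i] = u
				merged = true
				break
			}
		}
		if !merged {
			out = append(out, u)
		}
	}
	return out
}

func main() {
	cur := []Port{{Name: "http", Num: 80}, {Name: "metrics", Num: 9090}}
	// Only the "http" entry changes; "metrics" is left untouched.
	fmt.Println(mergeByKey(cur, []Port{{Name: "http", Num: 8080}}))
}
```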
Brian Grant Apr 24
Replying to @embano1 @jbeda
The inconsistency is a bug, but disruptive to fix
Brian Grant Apr 24
Replying to @jbeda
Unpredictable generated names, imperative set addition and removal, custom verbs, inconsistent metadata. As we can see from K8s v1beta1, consistency doesn't happen unless it's a priority
Brian Grant Apr 24
Replying to @mhausenblas
Scheduling Unit
Brian Grant Apr 24
Replying to @bgrant0607
BTW, when I was digging through old docs/decks, I found a diagram from the Dec 2013 API proposal. Sunit->Pod, SunitPrototype->PodTemplate, Replicate->ReplicaSet, Autoscale->HorizontalPodAutoscaler.
Brian Grant Apr 24
Replying to @bgrant0607
In the next thread, I’ll cover more about configuration itself, such as the origin of kubectl apply
Brian Grant Apr 24
Replying to @bgrant0607
There are some gaps in the model (e.g., …), but for the most part it facilitates generic operations on arbitrary resource types.
Brian Grant Apr 24
Replying to @bgrant0607
For the most part, controllers know which fields to propagate from one resource instance to another and wait gracefully on declarative object (rather than field) references, without assuming referential integrity, which enables relaxed operation ordering.
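A minimal sketch of that waiting behavior, with an invented store interface standing in for the API server: when a referenced object doesn't exist yet, the controller requeues instead of failing, so objects can be created in any order.

```go
package main

import (
	"errors"
	"fmt"
)

// Store is a hypothetical read-only view of resources; in Kubernetes this
// role is played by the API server (or an informer cache).
type Store interface {
	Get(kind, name string) (map[string]string, error)
}

var ErrNotFound = errors.New("not found")

// reconcile resolves a declarative object reference by name. If the referent
// is missing, it asks to be requeued rather than erroring, so no referential
// integrity is assumed and operation ordering stays relaxed.
func reconcile(s Store, configRef string) (requeue bool, err error) {
	cfg, err := s.Get("Config", configRef)
	if errors.Is(err, ErrNotFound) {
		return true, nil // wait gracefully; try again later
	}
	if err != nil {
		return false, err
	}
	// Propagate the relevant fields from the referenced object.
	fmt.Println("propagating:", cfg["data"])
	return false, nil
}

// memStore is a toy in-memory Store for the example.
type memStore map[string]map[string]string

func (m memStore) Get(kind, name string) (map[string]string, error) {
	if obj, ok := m[kind+"/"+name]; ok {
		return obj, nil
	}
	return nil, ErrNotFound
}

func main() {
	s := memStore{}
	if again, _ := reconcile(s, "app-config"); again {
		fmt.Println("referent not created yet; requeue")
	}
	s["Config/app-config"] = map[string]string{"data": "v1"}
	if again, _ := reconcile(s, "app-config"); !again {
		fmt.Println("done")
	}
}
```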
Brian Grant Apr 24
Replying to @bgrant0607
KRM is consistent and declarative. Metadata and verbs are uniform. Spec and status are distinctly separated. Resource identifiers, modeled closely after Borgmaster's, provide declarative names. Label selectors enable declarative sets.
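To make that uniformity concrete, here's a minimal Go sketch (mine, not from the thread) of the shape KRM gives every resource: standard metadata and labels, a declarative spec, and a separately reported status. The Widget kind and its fields are hypothetical.

```go
package main

import "fmt"

// ObjectMeta is the uniform metadata every KRM-style object carries.
type ObjectMeta struct {
	Name      string
	Namespace string
	Labels    map[string]string // label selectors pick out declarative sets
}

// WidgetSpec is the desired state, written by users and tools.
type WidgetSpec struct {
	Replicas int
	Selector map[string]string
}

// WidgetStatus is the observed state, written back by controllers.
type WidgetStatus struct {
	ReadyReplicas int
}

// Widget is a hypothetical resource; every kind follows the same layout,
// so generic verbs and tooling work across all of them.
type Widget struct {
	APIVersion string
	Kind       string
	Metadata   ObjectMeta
	Spec       WidgetSpec
	Status     WidgetStatus
}

func main() {
	w := Widget{
		APIVersion: "example.dev/v1", // invented group/version
		Kind:       "Widget",
		Metadata: ObjectMeta{
			Name:      "demo",
			Namespace: "default",
			Labels:    map[string]string{"app": "demo"},
		},
		Spec: WidgetSpec{Replicas: 3, Selector: map[string]string{"app": "demo"}},
	}
	fmt.Printf("%+v\n", w)
}
```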
Brian Grant Apr 24
Replying to @bgrant0607
We folded learnings from these 5+ systems into the Kubernetes Resource Model, which now supports arbitrarily many built-in types, aggregated APIs, and centralized storage (CRDs), and can be used to configure 1st-party and 3rd-party services, including GCP.
Brian Grant Apr 24
Replying to @jbeda
proposed layering an aggregated config store/service with consistent, declarative CRUD REST APIs over underlying GCP and third-party service APIs. This sort of later evolved into Deployment Manager.
Brian Grant Apr 24
Replying to @bgrant0607
GCP was composed of independent services, with some common standards, such as the org hierarchy and authz. They used REST APIs, as did the rest of the industry, and gRPC didn't exist yet. But GCP's APIs were not natively declarative, and Terraform didn't exist, either.
Brian Grant Apr 24
Replying to @davidopp
Omega supported an extensible object model, and had proposed putting an API in front of the persistent store, as we later did in Kubernetes, but it wasn't declarative. Separate work on a common configuration store was discontinued as Google Cloud became the focus
Brian Grant Apr 24
Replying to @bgrant0607
Others, such as for load balancing, built independent services with their own service APIs and configuration mechanisms. This enabled teams to evolve their services independently, but created a heterogeneous, inconsistent management surface.
Brian Grant Apr 24
Replying to @bgrant0607
Some extensions of the core functionality, such as for batch scheduling and vertical autoscaling, used the Borgmaster as a configuration store by manually adding substructures stored with Job objects, which were then retrieved by polling Jobs.
Brian Grant Apr 24
Replying to @bgrant0607
The APIs were manually mapped into the two Turing-complete configuration languages, and there was also a hand-crafted diff library for comparing the previous and new desired states. The sets of concepts, RPC operations, and configurable resource types were not easily extended
Brian Grant Apr 24
Replying to @bgrant0607
Hundreds to thousands of clients interfaced with this API. Many of them were asynchronous controllers or monitoring agents, as discussed in previous threads; there was also a simple command-line tool, and two widely used configuration CLIs.
Brian Grant Apr 24
Replying to @bgrant0607
Like most internal Google services, Borgmaster had an imperative, unversioned, monolithic RPC API built using the precursor to gRPC, Stubby. It exposed an ad hoc collection of operations, like CreateJob, LookupPackage, StartAllocUpdate, and SetMachineAttributes.
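To make the contrast concrete, a rough Go sketch (operation names from the tweet above; all signatures and types invented): an ad hoc imperative surface gives each resource its own bespoke verbs, while a KRM-style surface exposes the same small verb set for every kind.

```go
package main

import "fmt"

// JobSpec is a stand-in type for the example.
type JobSpec struct{ Name string }

// AdHocAPI sketches an imperative, per-resource surface: generic tooling
// (diff, apply, watch) can't be written once against it.
type AdHocAPI interface {
	CreateJob(spec JobSpec) (id int64, err error)
	LookupPackage(name string) (version string, err error)
	StartAllocUpdate(allocID int64) error
	SetMachineAttributes(machine string, attrs map[string]string) error
}

// UniformAPI sketches a KRM-style surface: the same verbs apply to every
// resource kind, operating on declarative objects.
type UniformAPI interface {
	Create(kind, name string, obj map[string]any) error
	Get(kind, name string) (map[string]any, error)
	Update(kind, name string, obj map[string]any) error
	Delete(kind, name string) error
}

func main() {
	fmt.Println("bespoke verbs per resource vs. one uniform verb set")
}
```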