Tanzu Community Edition - first try!

Today we're going to take a look at Tanzu Community Edition, which can be freely downloaded from tanzucommunityedition.io/download. No login needed, just download the binary for your platform. I am using a Mac, so I downloaded tce-darwin-amd64-v0.11.0.tar.gz. The same page also has a note on running clusters on M1 Macs. You'll need the tanzu CLI available on your system; if you don't have it, just extract the archive and run the install.sh script, which installs the latest version of the CLI as well.

$ tar xzvf tce-darwin-amd64-v0.11.0.tar.gz


./install.sh
====================================
 Installing Tanzu Community Edition
====================================

Installing tanzu cli to /usr/local/bin/tanzu
Password:

Installing plugin 'builder:v0.11.2'
✔  successfully installed 'builder' plugin
Installing plugin 'codegen:v0.11.2'
✔  successfully installed 'codegen' plugin
Installing plugin 'cluster:v0.11.2'
✔  successfully installed 'cluster' plugin
Installing plugin 'kubernetes-release:v0.11.2'
✔  successfully installed 'kubernetes-release' plugin
Installing plugin 'login:v0.11.2'
✔  successfully installed 'login' plugin
Installing plugin 'management-cluster:v0.11.2'
✔  successfully installed 'management-cluster' plugin
Installing plugin 'package:v0.11.2'
✔  successfully installed 'package' plugin
Installing plugin 'pinniped-auth:v0.11.2'
✔  successfully installed 'pinniped-auth' plugin
Installing plugin 'secret:v0.11.2'
✔  successfully installed 'secret' plugin
Installing plugin 'conformance:v0.11.0'
✔  successfully installed 'conformance' plugin
Installing plugin 'diagnostics:v0.11.0'
✔  successfully installed 'diagnostics' plugin
Installing plugin 'unmanaged-cluster:v0.11.0'
✔  successfully installed 'unmanaged-cluster' plugin

Installation complete!

Add the install location to your $PATH if it isn't there already. On macOS you can do that by adding a line to /etc/paths:

sudo nano /etc/paths
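Alternatively, if you'd rather not touch /etc/paths system-wide, you can append the install location to $PATH from your shell profile instead. A small sketch, assuming install.sh used its default target of /usr/local/bin:

```shell
# Append the tanzu install location to PATH for the current shell.
# Assumes the default install location of /usr/local/bin.
export PATH="$PATH:/usr/local/bin"

# Make it permanent for future shells (zsh is the macOS default shell):
echo 'export PATH="$PATH:/usr/local/bin"' >> ~/.zshrc

# Verify the directory is now on PATH
echo "$PATH" | grep -q "/usr/local/bin" && echo "PATH updated"
# prints: PATH updated
```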

Check if the Tanzu CLI is working.

tanzu --help

Tanzu CLI

Usage:
  tanzu [command]

Available command groups:

  Admin
    builder                 Build Tanzu components
    codegen                 Tanzu code generation tool

  Run
    cluster                 Kubernetes cluster operations
    conformance             Run Sonobuoy conformance tests against clusters
    diagnostics             Cluster diagnostics
    kubernetes-release      Kubernetes release operations
    management-cluster      Kubernetes management cluster operations
    package                 Tanzu package management
    secret                  Tanzu secret management
    unmanaged-cluster       Deploy and manage single-node, static, Tanzu clusters.

  System
    completion              Output shell completion code
    config                  Configuration for the CLI
    init                    Initialize the CLI
    login                   Login to the platform
    plugin                  Manage CLI plugins
    update                  Update the CLI
    version                 Version information


Flags:
  -h, --help   help for tanzu

Use "tanzu [command] --help" for more information about a command.

Not logged in

Note the version of the CLI.

tanzu version
version: v0.11.2
buildDate: 2022-03-24
sha: 6431d1d

Let's create our first unmanaged-cluster.

tanzu unmanaged-cluster create firstcluster


📁 Created cluster directory

🔧 Resolving Tanzu Kubernetes Release (TKR)
   projects.registry.vmware.com/tce/tkr:v0.17.0
   Downloaded to: /Users/arathod/.config/tanzu/tkg/unmanaged/bom/projects.registry.vmware.com_tce_tkr_v0.17.0
   Rendered Config: /Users/arathod/.config/tanzu/tkg/unmanaged/firstcluster/config.yaml
   Bootstrap Logs: /Users/arathod/.config/tanzu/tkg/unmanaged/firstcluster/bootstrap.log

🔧 Processing Tanzu Kubernetes Release

🎨 Selected base image
   projects.registry.vmware.com/tce/kind:v1.22.4

📦 Selected core package repository
   projects.registry.vmware.com/tce/repo-10:0.10.0

📦 Selected additional package repositories
   projects.registry.vmware.com/tce/main:v0.11.0

📦 Selected kapp-controller image bundle
   projects.registry.vmware.com/tce/kapp-controller-multi-pkg:v0.30.1

🚀 Creating cluster firstcluster
   Cluster creation using kind!
   ❤️  Checkout this awesome project at https://kind.sigs.k8s.io
failed to create cluster, Error: system checks detected issues, please resolve first: [minimum 2 GiB of memory is required]
Error: exit status 7

✖ exit status 7

Uh-oh. I realized that since my last Docker update, the preferences had turned the memory down from my usual 6 GB to 2 GB. I turn that back up in Docker's preferences and try again.

tanzu unmanaged-cluster create secondcluster

📁 Created cluster directory

🔧 Resolving Tanzu Kubernetes Release (TKR)
   projects.registry.vmware.com/tce/tkr:v0.17.0
   TKR exists at /Users/arathod/.config/tanzu/tkg/unmanaged/bom/projects.registry.vmware.com_tce_tkr_v0.17.0
   Rendered Config: /Users/arathod/.config/tanzu/tkg/unmanaged/secondcluster/config.yaml
   Bootstrap Logs: /Users/arathod/.config/tanzu/tkg/unmanaged/secondcluster/bootstrap.log

🔧 Processing Tanzu Kubernetes Release

🎨 Selected base image
   projects.registry.vmware.com/tce/kind:v1.22.4

📦 Selected core package repository
   projects.registry.vmware.com/tce/repo-10:0.10.0

📦 Selected additional package repositories
   projects.registry.vmware.com/tce/main:v0.11.0

📦 Selected kapp-controller image bundle
   projects.registry.vmware.com/tce/kapp-controller-multi-pkg:v0.30.1

🚀 Creating cluster secondcluster
   Cluster creation using kind!
   ❤️  Checkout this awesome project at https://kind.sigs.k8s.io
   Base image downloaded
   Cluster created
   To troubleshoot, use:

   kubectl ${COMMAND} --kubeconfig /Users/arathod/.config/tanzu/tkg/unmanaged/secondcluster/kube.conf

📧 Installing kapp-controller
   kapp-controller status: Running

📧 Installing package repositories
   Core package repo status: Reconcile succeeded

🌐 Installing CNI
   calico.community.tanzu.vmware.com:3.22.1

✅ Cluster created

🎮 kubectl context set to secondcluster

View available packages:
   tanzu package available list
View running pods:
   kubectl get po -A
Delete this cluster:
   tanzu unmanaged delete secondcluster

Voila! We have a Tanzu Community Edition unmanaged cluster up and running. Notice that the default CNI has changed from Antrea to Calico. Take a look at github.com/vmware-tanzu/community-edition/r.. for the release notes.
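The creation output above also points at the package ecosystem. As a sketch of what installing one looks like (the package name and version here are assumptions for illustration; run the list commands first to see what your repositories actually offer):

```shell
# List packages available from the installed package repositories
tanzu package available list

# See which versions of a given package are available
# (cert-manager is used here purely as an example)
tanzu package available list cert-manager.community.tanzu.vmware.com

# Install it; the version must match one listed by the command above
tanzu package install cert-manager \
  --package-name cert-manager.community.tanzu.vmware.com \
  --version 1.6.1
```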

tanzu unmanaged-cluster list
  NAME           PROVIDER
  firstcluster   kind
  secondcluster  kind

Looking at the contexts, we see that the current context is set to secondcluster.

kubectl config get-contexts
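With two clusters around, hopping between them is just a context switch. The context names below are assumptions; use whatever `kubectl config get-contexts` actually lists on your machine:

```shell
# List all contexts; the current one is marked with '*'
kubectl config get-contexts

# Switch to the other cluster's context (name is illustrative)
kubectl config use-context firstcluster

# Switch back
kubectl config use-context secondcluster
```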

Let's look at the nodes on this cluster. If you scroll back up through the unmanaged-cluster deployment output, you'll see the config file that was used:

/Users/arathod/.config/tanzu/tkg/unmanaged/secondcluster/config.yaml

Feel free to check it out.

In there you'll see ControlPlaneNodeCount: "1" and WorkerNodeCount: "0", which is why we end up with a single node.
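If you do want worker nodes, the create command accepts node-count flags. A hedged sketch (flag names as of TCE v0.11; verify with `tanzu unmanaged-cluster create --help` on your version):

```shell
# Create an unmanaged cluster with one control plane node and two workers.
# Cluster name and flag values are illustrative.
tanzu unmanaged-cluster create thirdcluster \
  --control-plane-node-count 1 \
  --worker-node-count 2
```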

kubectl get nodes
NAME                          STATUS   ROLES                  AGE     VERSION
secondcluster-control-plane   Ready    control-plane,master   5m22s   v1.22.4

Checking out all the namespaces in the cluster.

kubectl get ns -A
NAME                        STATUS   AGE
default                     Active   6m59s
kube-node-lease             Active   7m1s
kube-public                 Active   7m1s
kube-system                 Active   7m1s
local-path-storage          Active   6m56s
tanzu-package-repo-global   Active   6m46s
tkg-system                  Active   6m46s

Taking a look under the hood at all the CRDs that were created.

kubectl get crds
NAME                                                     CREATED AT
apps.kappctrl.k14s.io                                    2022-04-07T23:27:51Z
bgpconfigurations.crd.projectcalico.org                  2022-04-07T23:29:45Z
bgppeers.crd.projectcalico.org                           2022-04-07T23:29:45Z
blockaffinities.crd.projectcalico.org                    2022-04-07T23:29:45Z
caliconodestatuses.crd.projectcalico.org                 2022-04-07T23:29:45Z
clusterinformations.crd.projectcalico.org                2022-04-07T23:29:44Z
felixconfigurations.crd.projectcalico.org                2022-04-07T23:29:44Z
globalnetworkpolicies.crd.projectcalico.org              2022-04-07T23:29:46Z
globalnetworksets.crd.projectcalico.org                  2022-04-07T23:29:45Z
hostendpoints.crd.projectcalico.org                      2022-04-07T23:29:48Z
internalpackagemetadatas.internal.packaging.carvel.dev   2022-04-07T23:27:51Z
internalpackages.internal.packaging.carvel.dev           2022-04-07T23:27:51Z
ipamblocks.crd.projectcalico.org                         2022-04-07T23:29:48Z
ipamconfigs.crd.projectcalico.org                        2022-04-07T23:29:44Z
ipamhandles.crd.projectcalico.org                        2022-04-07T23:29:44Z
ippools.crd.projectcalico.org                            2022-04-07T23:29:47Z
ipreservations.crd.projectcalico.org                     2022-04-07T23:29:47Z
kubecontrollersconfigurations.crd.projectcalico.org      2022-04-07T23:29:47Z
networkpolicies.crd.projectcalico.org                    2022-04-07T23:29:47Z
networksets.crd.projectcalico.org                        2022-04-07T23:29:48Z
packageinstalls.packaging.carvel.dev                     2022-04-07T23:27:51Z
packagerepositories.packaging.carvel.dev                 2022-04-07T23:27:52Z

Note that the CNI is Calico.

kubectl get all -n kube-system
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/calico-kube-controllers-7bc46c76c-7m6hf               1/1     Running   0          5m26s
pod/calico-node-6wx96                                     1/1     Running   0          5m26s
pod/coredns-78fcd69978-hflvf                              1/1     Running   0          7m21s
pod/coredns-78fcd69978-r8fmp                              1/1     Running   0          7m21s
pod/etcd-secondcluster-control-plane                      1/1     Running   0          7m36s
pod/kube-apiserver-secondcluster-control-plane            1/1     Running   0          7m36s
pod/kube-controller-manager-secondcluster-control-plane   1/1     Running   0          7m37s
pod/kube-proxy-ctwnz                                      1/1     Running   0          7m21s
pod/kube-scheduler-secondcluster-control-plane            1/1     Running   0          7m35s

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   7m36s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   5m26s
daemonset.apps/kube-proxy    1         1         1       1            1           kubernetes.io/os=linux   7m36s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           5m26s
deployment.apps/coredns                   2/2     2            2           7m36s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-7bc46c76c   1         1         1       5m26s
replicaset.apps/coredns-78fcd69978                  2         2         2       7m21s

Now let's create a deployment to see how it all works.

kubectl create deployment nginxdeployment1 --image=nginx --replicas=3

kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginxdeployment1   3/3     3            3           24m
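
To actually reach those nginx pods, you can expose the deployment as a service and port-forward to it. This is standard kubectl, nothing TCE-specific; the local port 8080 is an arbitrary choice:

```shell
# Expose the deployment on port 80 inside the cluster
kubectl expose deployment nginxdeployment1 --port=80

# Forward a local port to the service and hit it
kubectl port-forward service/nginxdeployment1 8080:80 &
curl -s http://localhost:8080 | head -n 4

# Clean up the background port-forward when done
kill %1
```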

Cool!