K8S (Kubernetes) Notes
Summary of commands I’ve used or think are useful
Set your default namespace
You do this so you don’t need to append --namespace MyNameSpace (or -n MyNameSpace) to all of your kubectl commands.
# This displays the current context (the default namespace is stored on the context)
kubectl config current-context
# This changes the default namespace for the current context
kubectl config set-context --current --namespace=$MyNameSpace
# Verify it was changed
kubectl config view --minify | grep namespace:
# Run any command to verify it is working as expected
kubectl get pods
NOTE: Later I’ll show how to use the ksn command to do this.
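For reference, the namespace set this way is stored on the context entry in $HOME/.kube/config, which looks roughly like this (the cluster/user/context names below are illustrative):

```
contexts:
- context:
    cluster: dev-cluster
    namespace: MyNameSpace
    user: dev-user
  name: dev
```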
Get a list of deployments
kg deployment
kubectl get --namespace XYZ deployments
# NOTE: You can set a default namespace with `ksn` or `kubeens`
Get a list of pods
kg pods
Display the logs for one pod (the pod has a sidecar, so $C is needed to select which container’s logs to view)
setP pod-name
kl $P $C
View the detailed description of the pod
k describe pod $P
kg pod $P -o yaml
View the FQDN and paths used for this ingress
kg ingress $INGRESS_NAME -o yaml
List the configmaps and secrets
kg configmap
kg secrets
Dump the value of a configmap or secret
k describe configmap MYCONFIGMAP
kg secret MYSECRET -o yaml
List the nodes
kg nodes
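The values in the kg secret ... -o yaml output come back base64-encoded. You can decode a single field by piping a jsonpath query through base64 -d. The secret name, key, and value below are made up for illustration:

```shell
# kg secret MYSECRET -o jsonpath='{.data.password}' | base64 -d
# The jsonpath output is base64-encoded; decoding a sample value by hand:
echo 'cGFzc3dvcmQxMjM=' | base64 -d   # prints password123
```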
Delete a pod that is acting up (it’ll get restarted)
k delete pod $P
Restart (Bounce) an App
I’ve changed my configmap and need to restart my application to pick up the change, how do I do this?
TLDR: Use rollout restart:
kubectl get deployments
kubectl rollout restart deployment/$DEPLOYMENT_NAME
For a complete list of the different ways to bounce a POD see below:
- Delete the pod - and Kubernetes will recreate it!
kubectl delete pod $POD_NAME
- Restart the app the proper way. This allows K8S to start the new application
and switch to it ensuring zero or minimal down time.
kubectl get deployments
kubectl rollout restart deployment/$DEPLOYMENT_NAME
- If you want to shut the POD/application down for a bit (e.g., to make the
configmap change) and then bring it back, scale the deployment to zero and back up.
kubectl scale deployment $DEPLOYMENT_NAME --replicas=0
kubectl scale deployment $DEPLOYMENT_NAME --replicas=$WANT_COUNT
- Updating an environment variable on the deployment also triggers a rolling restart
(a pod’s env can’t be edited in place).
kubectl set env deployment/$DEPLOYMENT_NAME KEY=VALUE
My Notes on setting up K8S aliases
The latest way I do this is by updating my $HOME/.bashrc file as follows:
# Set N=-nNamespace; if N isn't set there's no harm, no namespace will be passed
alias k='kubectl $N'
alias kg='kubectl get $N'
alias ka='kubectl apply $N'
alias kl='kubectl logs $N'
# Set P so I can use commands like kl $P $C
# For example,
# P=ki-v2-userprofile-service-5844b866d-q98w5
# C=ki-v2-userprofile-service
function setP(){
    P=$1
    #C="-c ${P::-17}" # Didn't work all the time; the trailing random suffix sometimes grew to 18 chars
    # Grab the container names from the pod spec and drop the linkerd sidecar
    PP=$(kg pod $P -o jsonpath="{.spec['containers'][*].name}" | xargs -n1 | grep -v linkerd)
    C="-c $PP"
}
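The commented-out C= line failed because it assumed a fixed-length suffix. If you only need the deployment name back from a pod name, a suffix-stripping parameter expansion works regardless of hash length (the pod name below is just the example from the comments above):

```shell
P=ki-v2-userprofile-service-5844b866d-q98w5
# Strip the last two hyphen-separated fields (replicaset hash + pod hash),
# whatever their lengths
DEPLOY=${P%-*-*}
echo "$DEPLOY"   # ki-v2-userprofile-service
```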
function ksn(){
    if [ "$1" = "" ]
    then
        kubectl config view -v6 2>&1 | grep 'Config loaded from file:' | sed -e 's/.*from file: /Config file: /'
        echo "Current context: $(kubectl config current-context)"
        echo "Default namespace: $(kubectl config view --minify | grep namespace: | sed 's/.*namespace: *//')"
        echo "Custom namespace: N=$N"
    elif [ "$1" = "--unset" ]
    then
        kubectl config set-context --current --namespace=
    else
        kubectl config set-context --current --namespace=$1
    fi
}
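The sed in ksn just strips everything up to "namespace:". As a standalone sketch on a sample line (the namespace value is made up):

```shell
# Sample line in the shape `kubectl config view --minify` prints
line='    namespace: my-app-ns'
ns=$(printf '%s\n' "$line" | sed 's/.*namespace: *//')
echo "$ns"   # my-app-ns
```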
To use it, I do
ksn
Config file: /user/user1/.kube/config
Current context: dev
Default namespace: my-app-ns
Custom namespace: N=
# View the PODS
kg pods
NAME READY STATUS RESTARTS AGE
my-app-8b7b74d78-vvl5h 2/2 Running 0 28d
# To change the namespace I can use
ksn my-other-ns
Context "dev" modified.
ksn
Config file: /user/user1/.kube/config
Current context: dev
Default namespace: my-other-ns
Custom namespace: N=
kg pods
NAME READY STATUS RESTARTS AGE
my-other-app1-7ff9cb7fb8-2mxlb 2/2 Running 0 36h
Other
(Q) What is the equivalent of docker inspect?
(A)
kubectl get -o json pod thePodName
kubectl describe pod nginx-deployment-1006230814-6winp
(Q) How can I see the hostname and path mappings that are defined on a K8S app?
(A) The mappings are defined in a K8S ingress object, which can be viewed using
kubectl describe ingress $N ingress-my-app
Name: ingress-my-app
Namespace: my-app-ns
Address: k8s-ingressn-ingressn-351333f0b5-f4e829cce64a8bc6.elb.us-east-1.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates my-app.example.com
Rules:
Host Path Backends
---- ---- --------
my-app.example.com
/app1 svc-my-app-components:8443 (10.114.235.31:8443)
/app2 svc-my-app-components:8443 (10.114.235.31:8443)
/app3 svc-my-app-components:8443 (10.114.235.31:8443)
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":[""],"port":443,"protocol":"HTTPS","serviceName":"my-app-ns:svc-my-app-components","ingressName":"my-app-ns:ingress-k...
kubernetes.io/ingress.class: nginx
meta.helm.sh/release-name: my-app
meta.helm.sh/release-namespace: my-app-ns
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
Events: <none>
Configure HTTPS certificate to be used in K8S
Below are the important bits as I know them:
- You need to upload your certificate to K8S as a certificate (TLS secret) and use that name when configuring your ingress object.
- In Rancher, I used Cluster Explorer. Clicked on Secrets, and then Create and clicked on Create: TLS Certificate.
- I then uploaded my cert as a PEM file, and a PEM private key (not passphrase protected)
- Update the helm chart's ingress object to use the cert just uploaded (pats-example-com-cert). NOTE: This configuration was in the values.yaml file.
```
service:
  type: ClusterIP
  port: 443

ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: pats.example.com
      paths:
        - path: /
          pathType: Prefix
          service:
            name: myapp-service
  tls:
    - secretName: pats-example-com-cert
      hosts:
        - pats.example.com
```
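If you’d rather skip the Rancher UI, the same TLS secret can be written as a manifest and applied with `ka -f`. The data values below are placeholders, not real base64:

```
apiVersion: v1
kind: Secret
metadata:
  name: pats-example-com-cert
  namespace: my-app-ns
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded PEM certificate>
  tls.key: <base64-encoded PEM private key>
```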
# Configuring Resources
On one K8S, I needed to configure the resources for the app as follows:
Again, this configuration was in the values.yaml file.
resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 750Mi
# Setup $HOME/.kube/config file
1. Login to Rancher
2. Navigate to the Cluster
- Click on the Cluster from which you want to get the kubeconfig file.
3. In the top right corner, click Kubeconfig File (or similar, depending on your Rancher version).
This will generate a kubeconfig file that you can download.
4. Store that file in ~/.kube/config
After this you should be able to run
kubectl get pods -n <namespace>