Spring Application Deployed with Kubernetes
Step by step: building an application using Spring Boot and deploying it via Docker on Kubernetes with Helm
Full course:
- Setup: IDE and New Project
- Create the Data Repository
- Building a Service Layer
- Create a REST Controller
- Logging, Tracing and Error Handling
- Documentation and Code Coverage
- Database as a Service
- Containerize the Service With Docker
- Docker Registry
- Automated Build Pipeline
- Helm for Deployment
- Setting up a Kubernetes Cluster
- Automating Deployment (for CICD)
- System Design
- Messaging and Event Driven Design
- Web UI with React
- Containerizing our UI
- UI Build Pipeline
- Put the UI into Helm
- Creating an Ingress in Kubernetes
- Simplify Deployment
- Conclusion and Review
Finally, we’re going to be able to deploy our application, but first we need access to a cluster.
Install Tooling
We’re going to need a few more tools before we can get started. Use your OS package manager to install:
- Kubectl (Interact with a k8s instance)
- Minikube (Run a cluster locally)
I’m not going to cover how to install these tools; there’s plenty of documentation out there.
When you start up minikube, it automatically updates your kubeconfig file with the correct settings to talk to the local cluster.
Kubectl is the tool we’ll use to interact with k8s; specifically, helm uses it to apply the deployment files it generates.
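As a quick sanity check (assuming a stock minikube install), start the cluster and confirm kubectl is pointed at it:
minikube start
kubectl config current-context   # should print "minikube"
kubectl get nodes                # the single minikube node should report Ready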
Setting up Security
First we want to set up some secrets to hold configuration that we don’t want to check into git. We could create templates in helm and let helm manage the secrets for us, but that runs the risk of accidentally checking in a file with credentials in it, so let’s create them by hand for now.
Docker Registry
kubectl create secret docker-registry regcred --docker-server=r.cfcr.io --docker-username=<codefresh username> --docker-password=<codefresh docker access key>
This step tells minikube how to connect to codefresh in order to pull down the images we’ve built.
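You can confirm the secret was created and inspect it (the stored values are only base64-encoded, so treat the output as sensitive):
kubectl get secret regcred
kubectl get secret regcred -o yaml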
Database Credentials
kubectl create secret generic database-secrets --from-literal=SPRING_DATASOURCE_PASSWORD=<postgres password> --from-literal=SPRING_DATASOURCE_USERNAME=<postgres username>
We’re going to store the database credentials in a secret, which makes them available to our deployment files later.
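If you want to double-check what went into the secret, you can decode a key back out (k8s stores secret data base64-encoded, not encrypted):
kubectl get secret database-secrets -o jsonpath='{.data.SPRING_DATASOURCE_USERNAME}' | base64 --decode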
Update Helm Template
We need to add some more configuration to helm. Open deployment.yaml and update it with these additions:
(line 24)
spec:
  imagePullSecrets:
    - name: regcred
(line 51)
{{- range $key, $value := .Values.extraEnv }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
{{- if .Values.secretsToEnv }}
{{ toYaml .Values.secretsToEnv | indent 12 }}
{{- end }}
Be very careful when editing yaml; it’s very sensitive to indentation. Double-check my branch to see what the correct yaml file should look like.
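A quick way to catch indentation mistakes before deploying is to lint the chart and render the templates locally (using the same chart path we’ll pass to helm install later):
helm lint src/main/helm/medium-customer
helm template medium src/main/helm/medium-customer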
We also need to update values.yaml to add the mapping for our database credentials. Add this under extraEnv: {}:
extraEnv: {}
secretsToEnv:
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: database-secrets
        key: SPRING_DATASOURCE_PASSWORD
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: database-secrets
        key: SPRING_DATASOURCE_USERNAME
Here we’re reading the secrets and injecting them into the pod environment. The upper snake case naming is a Spring convention (relaxed binding) that matches the dot notation in an application properties file: we’re essentially setting spring.datasource.username and spring.datasource.password, but in a more secure way (as secure as k8s secrets are).
The final thing we need to do is point our image configuration at a valid tag. Go into your codefresh docker image registry, grab the tag of the latest build image, and put it into values.yaml:
image:
  repository: r.cfcr.io/docketdynamics/medium-customer
  tag: "4165814"
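If you want to make sure the tag is pullable before deploying, you can log in to the registry and pull the image by hand (using the same codefresh credentials as the regcred secret):
docker login r.cfcr.io
docker pull r.cfcr.io/docketdynamics/medium-customer:4165814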
Let’s Finally Deploy
Everything should be set up now. Let’s fire up helm and deploy into minikube:
helm install medium src/main/helm/medium-customer
This tells helm to start up a ‘release’ called medium using the chart defined in that directory.
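You can check that the release was created and see its current status with:
helm list
helm status medium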
Start up minikube’s k8s dashboard in a new window
minikube dashboard
and watch the pods tab to see if our image was pulled down correctly and the pod started up.
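If you prefer the command line to the dashboard, the same information is available through kubectl (the pod name will include a generated suffix, so copy it from the get pods output):
kubectl get pods
kubectl describe pod <pod name>
kubectl logs <pod name>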
We can also get the IP for our cluster
minikube ip
and use it in a POST request, along with the port we’ve defined this service to listen on:
POST /customer/ HTTP/1.1
Host: 172.17.144.73:30001
Content-Type: application/json

{
  "firstName": "Brian",
  "lastName": "Rook",
  "phoneNumber": "(303)868-5511",
  "email": "[email protected]"
}
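The same request from the command line with curl, substituting your own minikube IP and node port, would look something like this:
curl -i -X POST http://172.17.144.73:30001/customer/ \
  -H "Content-Type: application/json" \
  -d '{"firstName":"Brian","lastName":"Rook","phoneNumber":"(303)868-5511","email":"[email protected]"}'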
We should get a 201 back and we can log into the database via the DB Browser to confirm our record was stored.
Build and Commit
git checkout -b deploy
mvn clean install
git add .
git commit -m "deploy to minikube"
git push
git checkout master
git merge deploy
git push