Kube Cloud Pt4 | Tracing
Full course: Kube Cloud Pt4 | Observability
- Kube Cloud Pt4 | Logging
- Kube Cloud Pt4 | Tracing
- Kube Cloud Pt4 | Metrics
For this portion we’re going to use OpenTelemetry for tracing. The OpenTelemetry project’s intent is to cover the whole observability space in an open-source way. Unfortunately, at this time the logging and metrics portions are still in development, so we won’t be able to use them; the tracing piece, however, works well. To get our OpenTelemetry integration with logz.io working, we’ll have to install a service into Kubernetes. This service is the tracing collector, which the microservices send their tracing information to; the collector then pushes the traces over to logz.io. This takes the burden off of the microservices so they can focus on their core functionality.
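To make the data flow concrete, an OpenTelemetry Collector pipeline for this setup looks roughly like the sketch below. This is a simplified illustration, not the chart's exact configuration — though you can see the same `config.exporters.logzio` keys being set in the helm install command later:

```yaml
# Sketch of a collector pipeline: receive OTLP from the services,
# export to logz.io. Illustrative only -- the Helm chart generates
# its own version of this config.
receivers:
  otlp:
    protocols:
      grpc:            # the services send spans here on port 4317
exporters:
  logzio:
    region: us         # matches --set config.exporters.logzio.region
    account_token: ... # matches --set config.exporters.logzio.account_token
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logzio]
```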
Installing the Collector
Navigate to the tracing setup in logz.io
Choose the OT installation
Select the Kubernetes tab and you should see that they’re using helm to install the service. Very convenient.
Go ahead and install their repo and get the latest charts
helm repo add logzio-helm https://logzio.github.io/logzio-helm
helm repo update
and then install the chart per their instructions
helm install \
  --set config.exporters.logzio.region=us \
  --set config.exporters.logzio.account_token=... \
  logzio-otel-traces logzio-helm/logzio-otel-traces
You should see it start up in Okteto, ready to pump data over (there’s no data to pump yet)
Install the OpenTelemetry Agent
Update your Dockerfile to pull down the OpenTelemetry agent and add it to the Java options
FROM openjdk:15.0.2-slim-buster
COPY --from=build target/*.jar app.jar
ADD https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v1.11.0/opentelemetry-javaagent.jar /opt/opentelemetry-javaagent.jar
ENV JAVA_TOOL_OPTIONS=-javaagent:/opt/opentelemetry-javaagent.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
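The agent is configured entirely through environment variables. Beyond the endpoint and service name we’ll set in the deployment below, the standard OpenTelemetry variables can be baked into the image the same way if you want them; the values here are purely illustrative:

```dockerfile
# Optional agent tuning via standard OTel env vars (illustrative values)
ENV OTEL_TRACES_SAMPLER=parentbased_traceidratio
ENV OTEL_TRACES_SAMPLER_ARG=0.25
ENV OTEL_METRICS_EXPORTER=none
```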
Update Kubernetes Deployment
We also need to update the Kubernetes deployment to point the agent at our collector. Add this environment configuration to the deployment.yaml in helm/cloud-application-templates
env:
  - name: OTEL_SERVICE_NAME
    value: {{ include "cloud-application.fullname" . }}
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://logzio-otel-traces:4317"
  - name: SPRING_DATA_MONGODB_URI
    valueFrom:
      secretKeyRef:
        name: mongo-secrets
        key: SPRING_DATA_MONGODB_URI
That should be it. You can’t test locally (unless you’re deploying into Kubernetes locally with something like minikube), so you’ve got to build and deploy into Okteto. Might as well push to main
git add .
git commit -m "opentelemetry"
git push
You should be able to find traces in Jaeger now
You should be able to see the flame graphs that show you the interactions between the services. Make a request to cloud-application without a message so that it calls through to message-generator, then locate that request in Jaeger
Select that request and you should see the flame graph populate
This allows you to see the entire call trace and the duration of the calls.
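What makes this stitching possible is context propagation: the agent attaches a W3C `traceparent` header to outgoing requests, so spans recorded in cloud-application and message-generator share a trace id. The agent handles all of this for you; purely to illustrate the header format, here is a minimal parser (a hypothetical helper, not part of any OpenTelemetry API):

```java
// Illustrative parser for the W3C trace context header the agent
// propagates between services. Format:
//   00-<32 hex trace id>-<16 hex span id>-<2 hex flags>
public class TraceParent {
    public final String traceId;
    public final String spanId;

    private TraceParent(String traceId, String spanId) {
        this.traceId = traceId;
        this.spanId = spanId;
    }

    public static TraceParent parse(String header) {
        String[] parts = header.split("-");
        if (parts.length != 4 || !parts[0].equals("00")
                || parts[1].length() != 32 || parts[2].length() != 16) {
            throw new IllegalArgumentException("malformed traceparent: " + header);
        }
        return new TraceParent(parts[1], parts[2]);
    }
}
```

Every span in the flame graph carries that same 32-character trace id, which is also how logz.io links logs to traces in the next section.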
Add Tracing Fields to Logs
One more thing we can do is add the trace data to the logs. This isn’t strictly necessary, since logz.io links the logs to the traces, but we may want to see this data and it doesn’t take much effort.
Modify the pattern layout in logback-spring.xml to add the trace and span ids
<Pattern>
%black(%d{ISO8601}) %highlight(%-5level) [%blue(%t)] [%cyan(%X{trace_id}),%cyan(%X{span_id})] %yellow(%C{1.}): %msg%n%throwable
</Pattern>
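With that pattern in place, a log line comes out looking roughly like this (the timestamp, thread, class, and ids are all illustrative):

```
2022-04-01 12:00:00,123 INFO  [http-nio-8080-exec-1] [4bf92f3577b34da6a3ce929d0e0e4736,00f067aa0ba902b7] c.e.CloudApplicationController: generating message
```

The two bracketed hex values are the trace id and span id, so you can copy a trace id straight from a log line into the tracing UI.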