Spring Application Deployed with Kubernetes
Step by step: building an application with Spring Boot and deploying it via Docker on Kubernetes with Helm
Full course:
- Setup: IDE and New Project
- Create the Data Repository
- Building a Service Layer
- Create a REST Controller
- Logging, Tracing and Error Handling
- Documentation and Code Coverage
- Database as a Service
- Containerize the Service With Docker
- Docker Registry
- Automated Build Pipeline
- Helm for Deployment
- Setting up a Kubernetes Cluster
- Automating Deployment (for CICD)
- System Design
- Messaging and Event Driven Design
- Web UI with React
- Containerizing our UI
- UI Build Pipeline
- Put the UI into Helm
- Creating an Ingress in Kubernetes
- Simplify Deployment
- Conclusion and Review
Up to this point we’ve been writing, testing, building, deploying and integrating services. Let’s take a moment to reflect on what we have accomplished and why.
Here we are, 22 articles in. That seems like a lot, but I wanted to write these articles in an easily digested way that lets you see progress at each step. We’ve covered a lot of ground, so let’s review what we did and why.
Microservices
We’re using microservices for both the back end and the front end. This is important to understand: we’re choosing this architecture for a few good reasons, and accepting some significant drawbacks as we do.
Advantages
Microservices are smaller to write, which makes them easier to develop, test and build. They’re also easier to read, update and debug. If we design with the single responsibility principle in mind (a class, service or system should have only one reason to change), we have more flexibility to evolve our system.
Disadvantages
We’re going to have LOTS of services, and that is a HUGE problem. Lots of services means lots of maintenance, lots of monitoring, lots of service governance, lots of building and lots of deploying. As you’ve seen, a lot of work goes into building and deploying even a single service once you count the Helm deployment descriptors and the pipeline workflow.
Writing and developing services this way is also a huge shift in the role of the software engineer. I strongly feel that software engineers must own large aspects of the testing, building, deploying and monitoring of their systems when they go this route. There are way too many moving parts to throw testing over the wall to the QA team, to depend on devOps to build and deploy your application, or to rely on production reliability teams to monitor the state of the system. Software engineers MUST become experts in the components of their application and MUST be involved in all aspects of the software development lifecycle, from product vision to production monitoring.
It can seem like there’s a lot of disadvantage and not much advantage here. That’s true if you don’t consider the power of the tools that we’re using.
Using The Platform to Our Advantage
Software development has been evolving over the past few decades of building web-based applications. Significant changes have created huge improvements in productivity, which engineers can leverage to absorb their new responsibilities without drowning under the load.
Frameworks
Software frameworks have reduced the amount of code needed to support application and service functionality. This is significant because all of the functionality these frameworks provide used to be built in-house, which meant you ended up building larger applications and services. Since the ‘utility’ logic of documentation, testing, security, logging and so on is not core to the business, it tended to get written once and then ignored, because the main priority of the business is the product logic. You didn’t want to ‘spread around’ flaky utility logic by creating lots of services, because upgrading that logic was cumbersome.
Frameworks like Spring and React (and other front-end frameworks) have basically eliminated the need to write and maintain that code. Framework developers are more focused on those features (the utility logic) than a product team can be, so the logic is better designed, implemented and tested. That means we can write smaller services, because we have a solid foundation on which to start our new service development.
Testing frameworks also fall under this category. Tools like RestTemplate, WireMock and H2 allow software delivery team members to write meaningful, behavior-driven tests without depending on a QA team to test the application (although a QA team’s system knowledge and edge-case knowledge can definitely make engineer-written tests better).
Build Tools and Pipelines
Tools like Maven, Gradle and npm eliminate the need to write custom build logic for each service. This means we can support more applications and services, because they generally all operate off the same conventions and produce the same artifact consistently.
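As a quick illustration (a sketch; these are the tools’ own default conventions, not project-specific build logic), each tool reads a standard project descriptor and drops its artifact in a standard place:

    # Maven: reads pom.xml, produces target/<artifact>-<version>.jar
    mvn clean package
    # Gradle: reads build.gradle, produces build/libs/<artifact>-<version>.jar
    gradle build
    # npm: reads package.json; a typical React build lands in build/
    npm ci && npm run build

Any CI agent that knows these conventions can build any of our services without a per-service build script.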
Additionally, pipeline tools have evolved to the point where a software delivery team member can effectively write and support their application’s build and deployment without needing a devOps specialist (although their help is definitely needed and greatly appreciated).
Again, these automation tools allow software delivery engineers to maintain more services, so we can write smaller services.
Containerization and Virtualization
It cannot be overstated how important containerization has been in providing a consistent deliverable artifact. We can (almost) eliminate all of the wasted time related to “it works on my machine”. With containerization we can segment our application and standardize on a homogeneous stack or, if the need arises, switch to a heterogeneous one. We can evolve individual services independently, which means that (again) we can support more (and/or smaller) deliverable artifacts.
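A minimal sketch of that workflow, assuming the Dockerfile we created in the containerization step (the image name here is hypothetical):

    # Build the image once from the service's Dockerfile
    docker build -t customer-service:1.0.0 .
    # The identical artifact then runs on a laptop, a CI agent or a cluster node
    docker run --rm -p 8080:8080 customer-service:1.0.0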
Finally, we have a new language to communicate with our devOps peers. We don’t have to talk about how to build and configure artifacts if we can deliver a container that is capable of running on its own. We can focus on an entirely new deliverable and own more of the workflow that generates it.
Container Management and Orchestration
Containers are fine on their own, but when we have more (and smaller) applications and services we need a platform that can manage their lifecycle: killing unresponsive instances, starting up new instances, handling integration and communication between those instances, and so on. The platform concept arose very quickly after containerization because it was a necessity.
As software engineers we’re not usually exposed to the operational and monitoring side of the system. Primarily this was an aspect of the system’s complexity: operations teams focused on physical servers, networking, operating systems and security, and application deployment was just another task for them, usually a difficult one (depending on the quality of the software teams). However, most of those tasks (server setup and configuration, network communication, etc.) have now been abstracted into the platform. This means that software delivery team members with enough foresight (and capacity) to learn these systems become extra valuable and more productive, because they can work directly with their devOps peers instead of handing them tickets.
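To make that concrete, here is a rough sketch of the lifecycle management the platform gives us (the deployment name and label are illustrative, not necessarily the ones from our charts):

    # Delete a pod; the Deployment controller immediately starts a replacement
    kubectl delete pod -l app=customer-service
    kubectl get pods -l app=customer-service --watch
    # Scale out without provisioning or configuring a single server ourselves
    kubectl scale deployment customer-service --replicas=3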
Conclusion
This may just be my opinion, but writing smaller services means you’ll have better services and a more robust application system. Not only will the system perform better (in terms of scalability), it will also be easier to maintain and debug, and easier to evolve as business needs change.
We have to swallow a giant, bitter pill of complexity to achieve that. However, we can leverage frameworks, build pipelines, containerization and container orchestration platforms to significantly reduce the cost of that complexity. This means the software engineer’s role changes somewhat, because they must become proficient in all of those tools.
We’re 22 articles in (so far; I have more to write) and we’ve done a lot of work just to deploy what is basically a single page that reports from a database. It can seem like overkill. However, most of what we have done is foundation work that only needs to be done once and can be reused for new services and features.
What level of effort would it take to create a customer creation page? Just add a form to the web application that posts to the service we’ve already written.
If we updated the web application to do that, what would it take to deploy it? I think it would be a push to master, then copying the new chart version number into our umbrella chart and running a helm upgrade.
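Sketching that out (the chart, repository and release names here are hypothetical, not necessarily the exact ones from the earlier Helm articles):

    # In the umbrella chart's Chart.yaml, bump the dependency to the chart
    # version the pipeline just published, e.g.:
    #   dependencies:
    #     - name: web-ui
    #       version: 1.2.0   # <- new chart version
    #       repository: https://our-chart-repo.example.com
    helm dependency update ./umbrella-chart
    helm upgrade my-app ./umbrella-chart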
How difficult would it be to add the services needed for adding and listing items? If we created them, would we, as a sole delivery team member, be able to support all four services (customer, customer management, item, item management) in terms of developing, testing, building and deploying them? I think so.
Finally, how difficult would it be to take this application from one Kubernetes vendor to another? For example, suppose we’re hitting the limit of the free tier with our current vendor and found a better deal somewhere else. I think this would involve a change to our kubeconfig file and a helm install on the new cluster (in addition to some DNS work). I think a single person (or a very small team) would be capable of doing that without a whole separate operations effort.
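A rough sketch of that migration (the kubeconfig path and release name are hypothetical):

    # Point kubectl and helm at the new vendor's cluster
    export KUBECONFIG=~/.kube/new-vendor-config
    kubectl config current-context   # sanity-check we're on the new cluster
    # Install the same umbrella chart on the new cluster
    helm install my-app ./umbrella-chart
    # ...then point DNS at the new cluster's ingress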
There’s more work to be done and I plan on continuing this series, but I thought this was a good place to take a step back, look at the trail we just walked and think about where we could end up.