Create the Data Repository

Spring Application Deployed with Kubernetes

Step by step: building an application with Spring Boot and deploying it via Docker on Kubernetes with Helm

Full course
  1. Setup: IDE and New Project
  2. Create the Data Repository
  3. Building a Service Layer
  4. Create a REST Controller
  5. Logging, Tracing and Error Handling
  6. Documentation and Code Coverage
  7. Database as a Service
  8. Containerize the Service With Docker
  9. Docker Registry
  10. Automated Build Pipeline
  11. Helm for Deployment
  12. Setting up a Kubernetes Cluster
  13. Automating Deployment (for CICD)
  14. System Design
  15. Messaging and Event Driven Design
  16. Web UI with React
  17. Containerizing our UI
  18. UI Build Pipeline
  19. Put the UI in to Helm
  20. Creating an Ingress in Kubernetes
  21. Simplify Deployment
  22. Conclusion and Review

In the previous step we initialized a Spring Boot project so that we can start development. In this step we’re going to set up the persistence tier of the customer service.

Create the Data Repository

I like to start with the data model at the persistence level and work up. Since the datastore is usually the ‘foundation’ of your application, that order makes sense to me. We’re going to use Liquibase to define a database ‘contract’, Spring Data JPA to create the data access layer, and H2 as an in-memory datastore for testing.
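If these dependencies didn’t come along during project setup, the stack above corresponds roughly to the following entries in your pom.xml (a sketch; the versions are managed by the Spring Boot parent):

<!-- JPA/Hibernate data access -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<!-- Liquibase schema migrations -->
<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
</dependency>
<!-- Lombok, used for the entity below -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>
<!-- In-memory database for tests -->
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>test</scope>
</dependency>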

First, let’s set up Liquibase. In src/main/resources create the directory structure db/changelog/migrations. We’ll use Liquibase to define the DDL for our database table, trying to follow Liquibase best practices. In src/main/resources/db/changelog/migrations create a file called 0001-init-customer-table.yaml. Inside that file, define the customer table like this:

databaseChangeLog:
  - changeSet:
      id: create-customer-table
      author: brianrook
      changes:
        - createTable:
            catalogName: medium
            tableName: customer
            columns:
              - column:
                  name: customer_id
                  type: bigint
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: first_name
                  type: varchar(100)
                  constraints:
                    nullable: false
              - column:
                  name: last_name
                  type: varchar(100)
                  constraints:
                    nullable: false
              - column:
                  name: phone_number
                  type: varchar(20)
              - column:
                  name: email
                  type: varchar(150)
                  constraints:
                    nullable: false
              - column:
                  name: create_timestamp
                  type: timestamp
                  constraints:
                    nullable: false
                  defaultValueComputed: CURRENT_TIMESTAMP
              - column:
                  name: modify_timestamp
                  type: timestamp
                  constraints:
                    nullable: false
                  defaultValueComputed: CURRENT_TIMESTAMP
        - createSequence:
            sequenceName: customer_seq
            incrementBy: 1

What we’re doing here is defining the columns of our database table in a way that is compatible across database platforms. Liquibase reads this configuration and generates the correct statements for your database, so this ‘contract’ works against H2, PostgreSQL, MySQL, etc. without any changes to your DDL. We’re also defining a sequence that we’ll use to manage the primary key on our table.
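For example, run against PostgreSQL, the changeset above would produce DDL along these lines (an illustration of the idea, not Liquibase’s verbatim output):

CREATE TABLE customer (
    customer_id BIGINT NOT NULL,
    first_name VARCHAR(100) NOT NULL,
    last_name VARCHAR(100) NOT NULL,
    phone_number VARCHAR(20),
    email VARCHAR(150) NOT NULL,
    create_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
    modify_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
    CONSTRAINT pk_customer PRIMARY KEY (customer_id)
);

CREATE SEQUENCE customer_seq INCREMENT BY 1;

Against another platform, Liquibase emits that platform’s dialect from the same YAML.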

However, we also need to define how these changelogs will be executed. We can do that via a changelog master. In src/main/resources/db/changelog create a file called db.changelog-master.yaml with this content:

databaseChangeLog:
  - includeAll:
      path: db/changelog/migrations/

This tells the application to execute all of the changelogs in the given directory on startup. Because the files follow an ordered naming convention, they execute in the correct order. Liquibase also records which changesets have already been applied to the database, so on each startup it applies only the ones it detects are missing.
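Spring Boot looks for classpath:/db/changelog/db.changelog-master.yaml by default, so this layout needs no extra configuration. If you ever move the master changelog, you can point Spring Boot at it in application.yml (shown here with the default value):

spring:
  liquibase:
    change-log: classpath:/db/changelog/db.changelog-master.yaml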

Create the Entity

Now let’s create the data access object that will interact with the data repository. In src/main/java create the package com.brianrook.medium.customer.dao.entity.

In the entity package, create your database entity, CustomerEntity:

package com.brianrook.medium.customer.dao.entity;

import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

import javax.persistence.*;

@Data
@Entity
@Table(name = "customer")
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class CustomerEntity {
    @Id
    @SequenceGenerator(name = "customer_seq", sequenceName = "customer_seq", allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customer_seq")
    @Column(name = "customer_id")
    private Long customerId;

    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    @Column(name = "phone_number", nullable = false, unique = true)
    private String phoneNumber;
    @Column(name = "email", nullable = false, unique = true)
    private String email;
}

We’re making a lot of use of Lombok here. If you’re not familiar with it, it may look strange at first. Lombok reads these annotations and generates code at compile time: @Data gives us getters, setters, equals/hashCode, and toString; @Builder gives us a fluent builder; and the constructor annotations give us no-args and all-args constructors. That saves us from writing a lot of boilerplate Java, which speeds up development and keeps the code easy to read and debug.
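To make that concrete, here is roughly what Lombok saves us from writing, sketched by hand for a trimmed-down, two-field version of the class (not Lombok’s exact output; @Builder additionally generates a static builder() method and a nested builder class):

import java.util.Objects;

public class CustomerExample {
    private Long customerId;
    private String firstName;

    // @NoArgsConstructor
    public CustomerExample() {
    }

    // @AllArgsConstructor
    public CustomerExample(Long customerId, String firstName) {
        this.customerId = customerId;
        this.firstName = firstName;
    }

    // @Data: a getter and setter for every field...
    public Long getCustomerId() { return customerId; }
    public void setCustomerId(Long customerId) { this.customerId = customerId; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    // ...plus equals(), hashCode(), and toString()
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CustomerExample)) return false;
        CustomerExample that = (CustomerExample) o;
        return Objects.equals(customerId, that.customerId)
                && Objects.equals(firstName, that.firstName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(customerId, firstName);
    }

    @Override
    public String toString() {
        return "CustomerExample(customerId=" + customerId + ", firstName=" + firstName + ")";
    }
}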

Create the DAO

In the com.brianrook.medium.customer.dao package, create a Java interface called CustomerDAO with this content:

package com.brianrook.medium.customer.dao;

import com.brianrook.medium.customer.dao.entity.CustomerEntity;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface CustomerDAO extends CrudRepository<CustomerEntity, Long> {
}

We’re using Spring Data JPA to generate the boilerplate CRUD functions so we don’t have to write them ourselves. We can also use this interface to access the database directly, which we’ll do in testing.
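Spring Data can also derive queries from method names. For example, if we later needed to look customers up by email, adding a single method to the interface would be enough (a hypothetical addition, not something this step needs):

import java.util.Optional;

public interface CustomerDAO extends CrudRepository<CustomerEntity, Long> {
    // Spring Data parses the method name and generates the
    // "select ... where email = ?" query for us
    Optional<CustomerEntity> findByEmail(String email);
}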

Write the Test

Let’s see if we can actually write into the table and read the data back using the DAO we just wrote. In src/test/java create a package called com.brianrook.medium.customer.dao. In it, create a test class called CustomerDAOTest with this content:

package com.brianrook.medium.customer.dao;

import com.brianrook.medium.customer.dao.entity.CustomerEntity;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.junit.jupiter.SpringExtension;

import java.util.Optional;

import static org.assertj.core.api.Assertions.assertThat;

@ExtendWith(SpringExtension.class)
@DataJpaTest
public class CustomerDAOTest {
    @Autowired
    private CustomerDAO customerDAO;

    @Test
    public void testSaveCustomer()
    {
        CustomerEntity testCustomer = CustomerEntity.builder()
                .firstName("Brian")
                .lastName("Rook")
                .email("[email protected]")
                .phoneNumber("303-555-1212")
                .build();

        customerDAO.save(testCustomer);

        Optional<CustomerEntity> returnCustomer = customerDAO.findById(testCustomer.getCustomerId());

        assertThat(returnCustomer.isPresent()).isTrue();
        assertThat(returnCustomer.get()).isEqualTo(testCustomer);
    }
}

If you run the test, you should get a green check / success.

We can look at the logs and see what just happened.

2020-03-27 16:25:38.530  INFO [,,,] 7636 --- [           main] beddedDataSourceBeanFactoryPostProcessor : Replacing 'dataSource' DataSource bean with embedded version
2020-03-27 16:25:38.863  INFO [,,,] 7636 --- [           main] o.s.j.d.e.EmbeddedDatabaseFactory        : Starting embedded database: url='jdbc:h2:mem:88a922a3-9a84-4153-8c73-085c38168e17;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false', username='sa'

These lines tell us that Spring is replacing our datasource with an in-memory H2 database that it starts up just for the test.
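That swap is @DataJpaTest’s default behavior. If you ever want a repository test to run against the real datasource from your application properties instead, you can opt out of the replacement (shown as an option; we don’t need it in this step):

import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class CustomerDAOAgainstRealDbTest {
    // tests here hit the datasource configured in application properties
}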

2020-03-27 16:25:39.520  INFO [,,,] 7636 --- [           main] org.hibernate.dialect.Dialect            : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
Hibernate: drop table customer if exists
Hibernate: drop sequence if exists customer_seq
Hibernate: create sequence customer_seq start with 1 increment by 1
Hibernate: create table customer (customer_id bigint not null, email varchar(255) not null, first_name varchar(255), last_name varchar(255), phone_number varchar(255) not null, primary key (customer_id))
Hibernate: alter table customer add constraint UK_s25r3nckqlux0klke4y3yb9u0 unique (email)
Hibernate: alter table customer add constraint UK_81500y90b5jyslc733s8iisk unique (phone_number)

These lines show Hibernate creating the schema in the in-memory H2 database. Notice that the DDL comes from our entity annotations rather than the Liquibase changelog (hence the varchar(255) defaults and the unique constraints): with an embedded database, @DataJpaTest defaults to Hibernate’s create-drop schema generation.
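If you would rather have this test exercise the Liquibase changelog instead of Hibernate’s generated schema, one option (an optional tweak, not required for this step) is a src/test/resources/application.yml along these lines:

spring:
  jpa:
    hibernate:
      ddl-auto: none    # stop Hibernate from creating/dropping the schema
  liquibase:
    enabled: true       # let the changelog build the tables instead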

2020-03-27 16:25:40.362  INFO [,,,] 7636 --- [           main] o.s.t.c.transaction.TransactionContext   : Began transaction (1) for test context ...
Hibernate: call next value for customer_seq
2020-03-27 16:25:40.574  INFO [,,,] 7636 --- [           main] o.s.t.c.transaction.TransactionContext   : Rolled back transaction for test: ...

These lines tell us that the unit test opened a transaction for the duration of the test and rolled it back when it finished, so we don’t have to worry about stale test data contaminating the database in future runs. The test is idempotent: we can run it as much as we want and it returns the database to its original state.
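That rollback is the default for transactional Spring tests. If a test ever needs its data to survive, say to inspect the database by hand afterwards, you can override the behavior per test method (shown as an option; we don’t want it here):

import org.springframework.test.annotation.Commit;

@Test
@Commit  // overrides the default rollback; the saved row persists after the test
public void testSaveCustomerAndKeep() {
    customerDAO.save(CustomerEntity.builder()
            .firstName("Brian")
            .lastName("Rook")
            .email("test@example.com")   // placeholder address
            .phoneNumber("303-555-1212")
            .build());
}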

Build and Commit

Confirm our build still works:

mvn clean install

and if we have success, we should commit:

git checkout -b dao
git add .
git commit -m "added database repository"
git push --set-upstream origin dao

I’m going to try to keep each step in its own branch so that, if you’re following along, you can see each stage. I hated articles that had stages but whose git repo was only the ‘final product’, making it difficult to see how the code or service evolved. I’ll try to prevent that from happening here.

I’m also going to merge this branch back into master so that I can use it as a baseline for the next step.

git checkout master
git merge --squash dao
git commit -m "Added DB and DAO"
git push
