
· 3 min read
Byju Luckose

In this blog post, we'll walk through building a cloud-native Spring Boot application that runs on Amazon EKS (Elastic Kubernetes Service) and securely uploads files to an Amazon S3 bucket using IAM Roles for Service Accounts (IRSA). This allows your microservice to access AWS services like S3 without embedding credentials.

Why IRSA?

Traditionally, applications supplied static access key/secret key pairs to the AWS SDK. In Kubernetes, storing such long-lived credentials in Secrets or environment variables is insecure and hard to manage. IRSA allows you to:

  • Grant fine-grained access to AWS resources
  • Avoid storing AWS credentials in your app
  • Rely on short-lived credentials provided by EKS

Overview

Here's the architecture we'll implement:

  1. Spring Boot app runs in EKS
  2. The app uses AWS SDK v2
  3. IRSA provides access to S3

Step 1: Create an IAM Policy for S3 Access

Create a policy named S3UploadPolicy with permissions for your bucket:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}

Step 2: Create an IAM Role for EKS

Use eksctl to create a service account and bind it to the IAM role:


eksctl create iamserviceaccount \
  --name s3-uploader-sa \
  --namespace default \
  --cluster your-cluster-name \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/S3UploadPolicy \
  --approve \
  --override-existing-serviceaccounts
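To confirm that eksctl wired things up, you can inspect the service account it created and read back the IAM role annotation (names match the command above):

```shell
# Print the IAM role ARN bound to the service account via IRSA
kubectl get serviceaccount s3-uploader-sa \
  --namespace default \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```

If the annotation is missing, pods using this service account will fall back to the node's instance role instead of the scoped S3 policy.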

Step 3: Spring Boot Setup

Add AWS SDK Dependency


<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
</dependency>
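The snippet above omits a version. With AWS SDK v2 the usual approach is to import the SDK's Maven BOM in `dependencyManagement` so all SDK modules share one version (the version number below is illustrative — use the latest release):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>bom</artifactId>
      <version>2.20.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```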

Java Code to Upload to S3


import org.springframework.stereotype.Service;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.io.InputStream;

@Service
public class S3Uploader {

    private final S3Client s3Client;

    public S3Uploader() {
        this.s3Client = S3Client.builder()
                .region(Region.EU_CENTRAL_1)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();
    }

    public void uploadFile(String bucketName, String key, InputStream inputStream, long contentLength) {
        PutObjectRequest putRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        s3Client.putObject(putRequest, RequestBody.fromInputStream(inputStream, contentLength));
    }
}

DefaultCredentialsProvider automatically picks up credentials from the environment, including those provided by IRSA in EKS.
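Under IRSA, the EKS Pod Identity Webhook mutates the pod to mount a projected service account token and inject two environment variables, which the default chain's web-identity provider consumes. You can verify this inside a running pod:

```shell
# Inspect the IRSA-injected environment inside the pod
kubectl exec deploy/s3-upload-microservice -- printenv | grep ^AWS_
```

You should see AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE (pointing at the projected token under /var/run/secrets/eks.amazonaws.com/serviceaccount/).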

Step 4: Kubernetes Deployment

Define the Service Account (optional if created via eksctl):


apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-uploader-sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IRSA_ROLE_NAME>


apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-upload-microservice
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3-upload-microservice
  template:
    metadata:
      labels:
        app: s3-upload-microservice
    spec:
      serviceAccountName: s3-uploader-sa
      containers:
        - name: app
          image: 123456789012.dkr.ecr.eu-central-1.amazonaws.com/s3-uploader:latest
          ports:
            - containerPort: 8080
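To keep the bucket name and region out of the Java code, they could also be externalized into the application configuration — a sketch, assuming the hypothetical property names `app.s3.bucket` and `app.s3.region`:

```yaml
app:
  s3:
    bucket: your-bucket-name
    region: eu-central-1
```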

Final Thoughts

You now have a secure, cloud-native Spring Boot application that uploads to S3 using best practices with AWS and Kubernetes. IRSA removes the need for credentials in your code and aligns perfectly with GitOps, DevSecOps, and Zero Trust principles.

Let your microservices speak AWS securely — the cloud-native way!

· 4 min read
Byju Luckose

When deploying applications in an Amazon EKS (Elastic Kubernetes Service) environment, securing them with SSL/TLS is essential to protect sensitive data and ensure secure communication. One of the most popular and free methods to obtain TLS certificates is through Let’s Encrypt. This guide walks you through the process of setting up TLS certificates on an EKS cluster using Cert-Manager and NGINX Ingress Controller.

Prerequisites

Before starting, ensure you have the following:

  • An EKS Cluster set up with worker nodes.
  • kubectl configured to access your cluster.
  • A registered domain name pointing to the EKS load balancer.
  • NGINX Ingress Controller installed on the cluster.

Step 1: Install Cert-Manager

Cert-Manager automates the management of TLS certificates within Kubernetes.

Install Cert-Manager

Run the following command to apply the official Cert-Manager manifests:


kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.2/cert-manager.yaml

Verify the pods:


kubectl get pods --namespace cert-manager

You should see the following pods running:

  • cert-manager

  • cert-manager-cainjector

  • cert-manager-webhook

Step 2: Create a ClusterIssuer

A ClusterIssuer is a resource in Kubernetes that defines how Cert-Manager should obtain certificates. We’ll create one using Let’s Encrypt’s production endpoint.

ClusterIssuer YAML File:

Create a file named letsencrypt-cluster-issuer.yaml with the following content:

yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your.email@example.com # Change this to your email
    privateKeySecretRef:
      name: letsencrypt-prod-private-key
    solvers:
      - http01:
          ingress:
            class: nginx

Apply the YAML:


kubectl apply -f letsencrypt-cluster-issuer.yaml

Verify that the ClusterIssuer is created successfully:


kubectl get clusterissuer

Step 3: Create an Ingress Resource with TLS

The Ingress resource will route external traffic to services within the cluster and configure TLS.

Ingress YAML File:

Create a file named ingress.yaml with the following content:

yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  rules:
    - host: yourdomain.com # Replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
  tls:
    - hosts:
        - yourdomain.com
      secretName: my-app-tls

Apply the YAML:


kubectl apply -f ingress.yaml

Step 4: Verify the TLS Certificate

Check the status of the certificate request:


kubectl describe certificate my-app-tls

You should see a message indicating that the certificate was successfully issued. Cert-Manager will create a Kubernetes Secret named my-app-tls that contains the TLS certificate and key.

List the secrets to verify:


kubectl get secrets

You should see my-app-tls listed.

Step 5: Test the HTTPS Connection

Once the certificate is issued, test the connection:

  1. Open a browser and navigate to https://yourdomain.com.

  2. Verify that the connection is secure by checking for a valid TLS certificate.

Troubleshooting Tips:

  • Ensure the domain correctly resolves to the EKS load balancer.

  • Check for errors in the Cert-Manager logs using:


kubectl logs -n cert-manager -l app=cert-manager

Step 6: Renewing and Managing Certificates

Let’s Encrypt certificates are valid for 90 days. Cert-Manager automatically renews them before expiry.

To check if the renewal is working:


kubectl get certificates

Look for the renewal time and ensure it’s set before the expiration date.
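The renewal timestamp is recorded on the Certificate's status; one way to read it directly (field name per the cert-manager v1 API):

```shell
# Print the time at which cert-manager will attempt renewal
kubectl get certificate my-app-tls -o jsonpath='{.status.renewalTime}'
```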

Step 7: Clean Up (Optional)

If you want to remove the configurations:


kubectl delete -f ingress.yaml
kubectl delete -f letsencrypt-cluster-issuer.yaml
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.2/cert-manager.yaml

Conclusion

Congratulations! You have successfully secured your applications on an EKS cluster using Let’s Encrypt certificates. With the help of Cert-Manager, you can automate certificate issuance, management, and renewal, ensuring your applications always maintain secure communications. By following this guide, you have taken a significant step towards enhancing the security posture of your Kubernetes environment.

· 3 min read
Byju Luckose

In modern microservices architecture, service discovery and service mesh are two essential concepts that help manage the complexity of distributed systems. In this blog post, we will show you how to integrate Spring Boot Eureka Service Discovery with OCI (Oracle Cloud Infrastructure) Service Mesh to leverage the benefits of both systems.

What is Service Discovery and Service Mesh?

  • Service Discovery: This is a mechanism that allows services to dynamically register and discover each other at runtime. In the Spring ecosystem, Spring Cloud Netflix provides Eureka, a service registry that improves fault tolerance and enables client-side load balancing.
  • Service Mesh: A service mesh like OCI Service Mesh provides an infrastructure to manage the communication traffic between microservices. It offers features such as load balancing, service-to-service authentication, and monitoring.

Steps to Integration

1. Setup and Configuration of OCI Service Mesh

The first step is to create and configure OCI Service Mesh resources.

  • Create Mesh and Virtual Services: Log in to the OCI dashboard and create a new mesh resource. Define virtual services and virtual service routes that correspond to your microservices.
  • Deployment of Sidecar Proxies: OCI Service Mesh uses sidecar proxies that need to be deployed in your microservices pods.

2. Configuration of Spring Boot Eureka

Eureka Server Configuration

Create a Spring Boot application for the Eureka server. Configure application.yml as follows:

yaml

server:
  port: 8761

eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
  instance:
    hostname: localhost
  server:
    enable-self-preservation: false

Eureka Client Configuration

Configure your Spring Boot microservices as Eureka clients. Add the following configuration to application.yml:

yaml

eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/
  instance:
    prefer-ip-address: true

3. Integration of Both Systems

To integrate Spring Boot Eureka and OCI Service Mesh, there are two approaches:

  • Dual Registration: Register your services with both Eureka and OCI Service Mesh.
  • Bridge Solution: Create a bridge service that syncs information from Eureka to OCI Service Mesh.
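The bridge idea can be sketched framework-free. Below is a minimal sync loop against a hand-rolled registry interface — in a real setup the source would be Spring Cloud's DiscoveryClient and the target would be OCI virtual services managed via the OCI SDK; all names here are illustrative:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Minimal stand-in for the registry being bridged (illustrative, not a real API).
interface ServiceRegistry {
    Set<String> serviceNames();                 // e.g. what Eureka reports
    List<String> instances(String serviceName); // host:port entries per service
}

class MeshBridge {
    // Tracks what was last pushed to the mesh, keyed by service name.
    private final Map<String, List<String>> meshState = new HashMap<>();

    // One sync pass: mirror the registry's view into the mesh state.
    // A real bridge would run this on a schedule (e.g. @Scheduled) and
    // translate each entry into OCI virtual-service route updates.
    public void sync(ServiceRegistry registry) {
        Set<String> seen = new HashSet<>();
        for (String name : registry.serviceNames()) {
            meshState.put(name, registry.instances(name));
            seen.add(name);
        }
        // Drop services that disappeared from the registry.
        meshState.keySet().retainAll(seen);
    }

    public Map<String, List<String>> currentMeshState() {
        return meshState;
    }
}
```

The important design point is that the sync is idempotent: running it repeatedly converges the mesh to the registry's current view, including removals.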

Example Configuration

Creating Mesh Resources in OCI

  • Create Mesh and Virtual Services: Navigate to the OCI dashboard and create a new mesh resource. Define the necessary virtual services and routes.

Deployment with Sidecar Proxy

Update your Kubernetes deployment YAML files to add sidecar proxies. An example snippet might look like this:

yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service-container
          image: my-service-image
          ports:
            - containerPort: 8080
        - name: istio-proxy
          image: istio/proxyv2
          args:
            - proxy
            - sidecar
            - --configPath
            - /etc/istio/proxy
            - --binaryPath
            - /usr/local/bin/envoy
            - --serviceCluster
            - my-service
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP

Conclusion

Integrating Spring Boot Eureka Service Discovery with OCI Service Mesh allows you to leverage the benefits of both systems: dynamic service registration and discovery from Eureka, as well as the advanced communication and security features of OCI Service Mesh. Through careful planning and configuration of both systems, you can create a robust and scalable microservices architecture.

With these steps, you are ready to integrate Spring Boot Eureka and OCI Service Mesh into your microservices architecture. Good luck with the implementation!

· 5 min read
Byju Luckose

The intersection of biotechnology and innovative design methodologies offers unparalleled opportunities to solve complex biological challenges. One such promising approach is Storytelling Collaborative Modeling, particularly when augmented by artificial intelligence (AI). This technique not only simplifies the conceptualization of sophisticated biotech processes like mRNA synthesis but also promotes a collaborative environment that bridges the gap between scientists, engineers, and other stakeholders.

The Power of Storytelling in Biotechnology:

Storytelling has the unique capability to demystify complex scientific concepts, making them accessible and relatable. In biotechnological applications, especially in areas like mRNA synthesis, storytelling can help depict the intricate process of how mRNA is synthesized, processed, and utilized in protein production within cells. This narrative approach helps non-specialists and stakeholders grasp the essential details without needing deep technical expertise.

Collaborative Modeling in Biotech Design:

In the realm of biotechnology, collaborative modeling involves multidisciplinary teams—including molecular biologists, bioinformatics specialists, and clinical researchers—coming together to build and refine models of biological processes. In the context of mRNA synthesis, these models might represent the transcription of DNA into mRNA, the translation of mRNA into proteins, or the therapeutic application of synthetic mRNA in vaccines.

Enhancing the Narrative with AI:

AI can dramatically enhance storytelling collaborative modeling by automating data analysis, generating predictive models, and simulating outcomes. For mRNA synthesis, AI tools can model how modifications in the mRNA sequence could impact protein structure and function, provide insights into mRNA stability, and predict immune responses in therapeutic applications, such as in mRNA vaccines.

Example: mRNA Synthesis in Vaccine Development:

Consider the development of an mRNA vaccine—a timely and pertinent application. The process starts with the design of an mRNA sequence that encodes for a viral protein. Storytelling can be used to narrate the journey of this mRNA from its synthesis to its delivery into human cells and subsequent protein production, which triggers an immune response.

AI enhances this narrative by simulating different scenarios, such as variations in the mRNA sequence or changes in the lipid nanoparticles used for delivery. These simulations help predict how these changes would affect the safety and efficacy of the vaccine, enabling more informed decision-making during the design phase.

Benefits of This Approach:

  • Enhanced Understanding: Complex biotechnological processes are explained in a simple, story-driven format that is easier for all stakeholders to understand.
  • Improved Collaboration: Facilitates a cooperative environment where diverse teams can contribute insights, leading to more innovative outcomes.
  • Faster Innovation: Accelerates the experimental phase with AI-driven predictions and simulations, reducing time-to-market for critical medical advancements.
  • Effective Communication: Helps communicate technical details to regulatory bodies, non-specialist stakeholders, and the public, enhancing transparency and trust.

Incorporating Output Models in Storytelling Collaborative Modeling:

A crucial component of leveraging AI in the narrative of mRNA synthesis is the creation and use of output models. These models serve as predictive tools that generate tangible outputs or predictions based on the input data and simulation parameters. By integrating these output models into the storytelling approach, teams can visualize and understand potential outcomes, making complex decisions more manageable.

Detailed Application in mRNA Vaccine Development:

To illustrate, let’s delve deeper into the mRNA vaccine development scenario:

Design Phase Output Models:

  • Sequence Optimization: AI models can predict how changes in the mRNA sequence affect the stability and efficacy of the resulting protein. For example, modifying nucleoside sequences to evade immune detection or enhance translational efficiency.
  • Simulation of Immune Response: Models simulate how the human immune system might react to the new protein produced by the vaccine mRNA. This helps in predicting efficacy and potential adverse reactions.

Manufacturing Phase Output Models:

  • Synthesis Efficiency: AI tools forecast the yield and purity of synthesized mRNA under various conditions, aiding in optimizing the production process.
  • Storage and Stability Predictions: Output models estimate how mRNA vaccines maintain stability under different storage conditions, crucial for distribution logistics.

Clinical Phase Output Models:

  • Patient Response Simulation: Before clinical trials, AI models can simulate patient responses based on genetic variability, helping to identify potential high-risk groups or efficacy rates across diverse populations.
  • Dosage Optimization: AI-driven models suggest optimal dosing regimens that maximize immune response while minimizing side effects.

Visualizing Outcomes with Enhanced Storytelling:

By incorporating these output models into the storytelling framework, biotechnologists can create a vivid, understandable narrative that follows the mRNA molecule from lab synthesis to patient immunization. This narrative includes visual aids like flowcharts, diagrams, and even animated simulations, making the information more accessible and engaging for all stakeholders.

Example Visualization:

Imagine an animated sequence showing the synthesis of mRNA, its encapsulation into lipid nanoparticles, its journey through the bloodstream, its uptake by a cell, and the subsequent production of the viral protein. Accompanying this, real-time data projections from AI models display potential success rates, immune response levels, and stability metrics. This powerful visual tool not only educates but also empowers decision-makers.

Conclusion:

In the high-stakes field of biotechnology, Storytelling Collaborative Modeling with AI is not merely a methodology—it's a revolutionary approach that can fundamentally alter how complex biological systems like mRNA synthesis are designed and understood. By leveraging the intuitive power of storytelling along with the analytical prowess of AI, biotech firms can navigate intricate scientific landscapes more effectively and foster breakthroughs that might otherwise remain out of reach. The integration of output models into Storytelling Collaborative Modeling transforms abstract scientific processes into tangible, actionable insights. In the world of biotechnology and specifically in the development of mRNA vaccines, this methodology is not just enhancing understanding—it's accelerating the pace of innovation and improving outcomes in vaccine development and beyond.

· 6 min read
Byju Luckose

In the evolving landscape of Spring Boot applications, managing configuration properties efficiently stands as a crucial aspect of development. The traditional approach has often leaned towards the @Value annotation for injecting configuration values. However, the @ConfigurationProperties annotation offers a robust alternative, enhancing type safety, grouping capability, and overall manageability of configuration properties. This blog delves into the advantages of adopting @ConfigurationProperties over @Value and guides on how to seamlessly integrate it into your Spring Boot applications.

Understanding @Value:

The @Value annotation in Spring Boot is straightforward and has been the go-to for many developers when it comes to injecting values from property files. It directly maps single values into fields, enabling quick and easy configuration.

java
@Component
public class ValueExample {

    @Value("${example.property}")
    private String property;
}

While @Value serves well for simple cases, its limitations become apparent as applications grow in complexity. It lacks type safety, does not support rich types like lists or maps directly, and can make refactoring a challenging task due to its string-based nature.

The Power of @ConfigurationProperties:

@ConfigurationProperties comes as a powerful alternative, offering numerous benefits that address the shortcomings of @Value. It enables binding of properties to structured objects, ensuring type safety and simplifying the management of grouped configuration data.

Benefits of @ConfigurationProperties:

  • Type Safety: By binding properties to POJOs, @ConfigurationProperties ensures compile-time checking, reducing the risk of type mismatches that can lead to runtime errors.

  • Grouping Configuration Properties: It allows for logical grouping of related properties into nested objects, enhancing readability and maintainability.

  • Rich Type Support: Unlike @Value, @ConfigurationProperties supports rich types out of the box, including lists, maps, and custom types, facilitating complex configuration setups.

  • Validation Support: Integration with JSR-303/JSR-380 validation annotations allows for validating configuration properties, ensuring that the application context fails fast in case of invalid configurations.

Implementing @ConfigurationProperties:

To leverage @ConfigurationProperties, define a class to bind your properties:

java
@ConfigurationProperties(prefix = "example")
public class ExampleProperties {

    private String property;

    // Getters and setters
}

Register your @ConfigurationProperties class as a bean and optionally enable validation:

java
@Configuration
@EnableConfigurationProperties(ExampleProperties.class)
public class ExampleConfig {
    // Bean methods
}

Example properties.yml Configuration

Consider an application that requires configuration for an email service, including server details and default properties for sending emails. The properties.yml file could look something like this:

yaml
email:
  host: smtp.example.com
  port: 587
  protocol: smtp
  defaults:
    from: no-reply@example.com
    subjectPrefix: "[MyApp]"

This YAML file defines a structured configuration for an email service, including the host, port, protocol, and some default values for the "from" address and a subject prefix.

Mapping properties.yml to a Java Class with @ConfigurationProperties

To utilize these configurations in a Spring Boot application, you would create a Java class annotated with @ConfigurationProperties that matches the structure of the YAML file:

java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

import javax.validation.constraints.NotNull;

@Configuration
@ConfigurationProperties(prefix = "email")
public class EmailProperties {

    @NotNull
    private String host;
    private int port;
    private String protocol;
    private Defaults defaults;

    public static class Defaults {
        private String from;
        private String subjectPrefix;

        // Getters and setters
    }

    // Getters and setters
}

In this example, the EmailProperties class is annotated with @ConfigurationProperties with the prefix "email", which corresponds to the top-level key in the properties.yml file. This class includes fields for host, port, protocol, and a nested Defaults class, which matches the nested structure under the email.defaults key in the YAML file.

Registering the Configuration Properties Class

To enable the use of @ConfigurationProperties, ensure the class is recognized as a bean within the Spring context. This can typically be achieved by annotating the class with @Configuration or @Component, or by using the @EnableConfigurationProperties annotation on one of your configuration classes, as shown in the previous example.

@ConfigurationProperties with a RestController

Integrating @ConfigurationProperties with a RestController in Spring Boot involves a few straightforward steps. This allows your application to dynamically adapt its behavior based on externalized configuration. Here's a comprehensive example demonstrating how to use @ConfigurationProperties within a RestController to manage application settings for a greeting service.

Step 1: Define the Configuration Properties

First, define a configuration properties class that corresponds to the properties you wish to externalize. In this example, we will create a simple greeting application that can be configured with different messages.

GreetingProperties.java

java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.validation.annotation.Validated;

import javax.validation.constraints.NotBlank;

@ConfigurationProperties(prefix = "greeting")
@Validated
public class GreetingProperties {

    @NotBlank
    private String message = "Hello, World!"; // default message

    // Getters and setters
    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}

Step 2: Create a Configuration Class to Enable @ConfigurationProperties

GreetingConfig.java

java
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableConfigurationProperties(GreetingProperties.class)
public class GreetingConfig {
    // This class enables the binding of properties to the GreetingProperties class
}

Step 3: Define the RestController Using the Configuration Properties

Now, let's use the GreetingProperties in a RestController to output a configurable greeting message.

GreetingController.java

java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private final GreetingProperties properties;

    // Inject the GreetingProperties bean through constructor injection
    public GreetingController(GreetingProperties properties) {
        this.properties = properties;
    }

    @GetMapping("/greeting")
    public String greeting() {
        // Use the message from the properties
        return properties.getMessage();
    }
}

Step 4: Add Configuration in application.properties or application.yml

Finally, define the configuration in your application.yml (or application.properties) file to customize the greeting message.

application.yml

yaml
greeting:
  message: "Welcome to Spring Boot!"

How It Works

  • The GreetingProperties class defines a field for a greeting message, which is configurable through the application's configuration files (application.yml or application.properties).
  • The GreetingConfig class uses @EnableConfigurationProperties to enable the binding of externalized values to the GreetingProperties class.
  • The GreetingController injects GreetingProperties to use the configurable message in its endpoint.

When you start the application and navigate to /greeting, the application will display the greeting message defined in your application.yml, showcasing how @ConfigurationProperties can be effectively used with a RestController to configure behavior dynamically. This approach enhances maintainability, type safety, and decouples the configuration from the business logic, making your application more flexible and configurable.
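With the application running locally, the endpoint can be exercised with curl (port 8080 is Spring Boot's default); per the configuration above, it returns the configured message:

```shell
curl http://localhost:8080/greeting
# Welcome to Spring Boot!
```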

Comparing @Value and @ConfigurationProperties:

While @Value is suitable for injecting standalone values, @ConfigurationProperties shines in scenarios requiring structured configuration data management. It not only improves type safety and configuration organization but also simplifies handling of dynamic properties through externalized configuration.

Conclusion:

Transitioning from @Value to @ConfigurationProperties in Spring Boot applications marks a step towards more robust and maintainable configuration management. By embracing @ConfigurationProperties, developers can enjoy a wide range of benefits from type safety and rich type support to easy validation and better organization of configuration properties. As you design and evolve your Spring Boot applications, consider leveraging @ConfigurationProperties to streamline your configuration management process.

In closing, while @Value has its place for straightforward, one-off injections, @ConfigurationProperties offers a comprehensive solution for managing complex and grouped configuration data, making it an essential tool in the Spring Boot developer's arsenal.

· 4 min read
Byju Luckose

In the rapidly evolving landscape of software development, microservices have emerged as a preferred architectural style for building scalable and flexible applications. As developers navigate this landscape, tools like Spring Boot and Docker Compose have become essential in streamlining development workflows and enhancing service networking. This blog explores how Spring Boot, when combined with Docker Compose, can simplify the development and deployment of microservice architectures.

The Power of Spring Boot in Microservice Architecture

Spring Boot, a project within the larger Spring ecosystem, offers developers an opinionated framework for building stand-alone, production-grade Spring-based applications with minimal fuss. Its auto-configuration feature, along with an extensive suite of starters, makes it an ideal choice for microservice development, where the focus is on developing business logic rather than boilerplate code.

Microservices built with Spring Boot are self-contained and loosely coupled, allowing for independent development, deployment, and scaling. This architectural style promotes resilience and flexibility, essential qualities in today's fast-paced development environments.

Docker Compose: A Symphony of Containers

Enter Docker Compose, a tool that simplifies the deployment of multi-container Docker applications. With Docker Compose, you can define and run multi-container Docker applications using a simple YAML file. This is particularly beneficial in a microservices architecture, where each service runs in its own container environment.

Docker Compose ensures consistency across environments, reducing the "it works on my machine" syndrome. By specifying service dependencies, environment variables, and build parameters in the Docker Compose file, developers can ensure that microservices interact seamlessly, both in development and production environments.

Integrating Spring Boot with Docker Compose

The integration of Spring Boot and Docker Compose in microservice development brings about a streamlined workflow that enhances productivity and reduces time to market. Here's how they work together:

  • Service Isolation: Each Spring Boot microservice is developed and deployed as a separate entity within its Docker container, ensuring isolation and minimizing conflicts between services.

  • Service Networking: Docker Compose facilitates easy networking between containers, allowing Spring Boot microservices to communicate with each other through well-defined network aliases.

  • Environment Standardization: Docker Compose files define the runtime environment of your microservices, ensuring that they run consistently across development, testing, and production.

  • Simplified Deployment: With Docker Compose, you can deploy your entire stack with a single command, docker-compose up, significantly simplifying the deployment process.

A Practical Example

Let's consider a simple example where we have two Spring Boot microservices: Service A and Service B, where Service A calls Service B. We use Docker Compose to define and run these services.

Step 1: Create Spring Boot Microservices

First, develop your microservices using Spring Boot. Each microservice should be a standalone application, focusing on a specific business capability.

Step 2: Dockerize Your Services

Create a Dockerfile for each microservice to specify how they should be built and packaged into Docker images.
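A minimal Dockerfile for a Spring Boot fat jar might look like this (the base image and jar path are assumptions — adjust to your build):

```dockerfile
# Run the Spring Boot fat jar on a JRE-only base image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```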

Step 3: Define Your Docker Compose File

Create a docker-compose.yml file at the root of your project. Define services, network settings, and dependencies corresponding to each Spring Boot microservice.

yaml
version: '3'
services:
  serviceA:
    build: ./serviceA
    ports:
      - "8080:8080"
    networks:
      - service-network

  serviceB:
    build: ./serviceB
    ports:
      - "8081:8081"
    networks:
      - service-network

networks:
  service-network:
    driver: bridge

Step 4: Run Your Services

With Docker Compose, you can launch your entire microservice stack using:

bash
docker-compose up --build

This command builds the images for your services (if they're not already built) and starts them up, ensuring they're properly networked together.

Conclusion

Integrating Spring Boot and Docker Compose in microservice architecture not only simplifies development and deployment but also ensures a level of standardization and isolation critical for modern applications. This synergy allows developers to focus more on solving business problems and less on the underlying infrastructure challenges, leading to faster development cycles and more robust, scalable applications.

· 14 min read
Byju Luckose

In the rapidly evolving landscape of software development, microservices have emerged as a cornerstone of modern application architecture. By decomposing applications into smaller, loosely coupled services, organizations can enhance scalability, flexibility, and deployment speeds. However, the distributed nature of microservices introduces its own set of challenges, including service discovery, configuration management, and fault tolerance. To navigate these complexities, developers and architects leverage a set of distributed system patterns specifically tailored for microservices. This blog explores these patterns, offering insights into their roles and benefits in building resilient, scalable, and manageable microservices architectures.

1. API Gateway Pattern: The Front Door to Microservices

The API Gateway pattern serves as the unified entry point for all client requests to the microservices ecosystem. It abstracts the underlying complexity of the microservices architecture, providing clients with a single endpoint to interact with. This pattern is pivotal in handling cross-cutting concerns such as authentication, authorization, logging, and SSL termination. It routes requests to the appropriate microservice, thereby simplifying the client-side code and enhancing the security and manageability of the application.

Example:

This example demonstrates setting up a basic API Gateway that routes requests to two microservices: user-service and product-service. For simplicity, the services will be stubbed out with basic Spring Boot applications that return dummy responses.

Step 1: Create the API Gateway Service

  • Setup: Initialize a new Spring Boot project named api-gateway using Spring Initializr. Select Gradle or Maven as the build tool and add Spring Web and Spring Cloud Gateway as dependencies.

  • Configure the Gateway Routes: In the application.yml or application.properties file of your api-gateway project, define routes to the user-service and product-service. Assuming these services run locally on ports 8081 and 8082 respectively, your configuration might look like this:

yaml
spring:
  cloud:
    gateway:
      routes:
        - id: user-service
          uri: http://localhost:8081
          predicates:
            - Path=/users/**
        - id: product-service
          uri: http://localhost:8082
          predicates:
            - Path=/products/**

  • Run the Application: Start the api-gateway application. Spring Cloud Gateway will now route requests to /users/** to the user-service and /products/** to the product-service.

Step 2: Stubbing Out the Microservices

For user-service and product-service, you'll create two simple Spring Boot applications. Here's how you can stub them out:

  • Create Spring Boot Projects: Use Spring Initializr to create two projects, user-service and product-service, with Spring Web dependency.

  • Implement Basic Controllers: For each service, implement a basic REST controller that defines endpoints to return dummy data.

User Service

java
@RestController
@RequestMapping("/users")
public class UserController {

    @GetMapping
    public ResponseEntity<String> listUsers() {
        return ResponseEntity.ok("Listing all users");
    }
}

Product Service

java
@RestController
@RequestMapping("/products")
public class ProductController {

    @GetMapping
    public ResponseEntity<String> listProducts() {
        return ResponseEntity.ok("Listing all products");
    }
}

  • Configure and Run the Services: Ensure user-service runs on port 8081 and product-service on port 8082. You can specify the server port in each project's application.properties file.

For user-service:

properties
server.port=8081

For product-service:

properties
server.port=8082

Run both applications.

Testing the Setup

With api-gateway, user-service, and product-service running, you can test the API Gateway pattern:

  • Accessing http://localhost:<gateway-port>/users should route the request to the user-service and return "Listing all users".
  • Accessing http://localhost:<gateway-port>/products should route the request to the product-service and return "Listing all products".

Replace <gateway-port> with the actual port your api-gateway application is running on, usually 8080 if not configured otherwise.

This example illustrates the API Gateway pattern's fundamentals, providing a central point for routing requests to various microservices based on paths. For production scenarios, consider adding security, logging, and resilience features to your gateway.

2. Service Discovery: Dynamic Connectivity in a Microservice World

Microservices often need to communicate with each other, but in a dynamic environment where services can move, scale, or fail, hard-coding service locations becomes impractical. The Service Discovery pattern enables services to dynamically discover and communicate with each other. It can be implemented via client-side discovery, where services query a registry to find their peers, or server-side discovery, where a router or load balancer queries the registry and directs the request to the appropriate service.

Example:

Implementing Service Discovery in a microservices architecture enables services to dynamically discover and communicate with each other. This is essential for building scalable and flexible systems. Spring Cloud Netflix Eureka is a popular choice for Service Discovery within the Spring ecosystem. In this example, we'll set up Eureka Server for service registration and discovery, and then create two simple microservices (client-service and server-service) that register themselves with Eureka and demonstrate how client-service discovers and calls server-service.

Step 1: Setup Eureka Server

  • Initialize a Spring Boot Project: Use Spring Initializr to create a new project named eureka-server. Choose Spring Boot version (make sure it's compatible with Spring Cloud), add Spring Web, and Eureka Server dependencies.

  • Enable Eureka Server: In the main application class, use @EnableEurekaServer annotation.

java
@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

  • Configure Eureka Server: In application.properties or application.yml, set the application port and disable registration with Eureka, since the server doesn't need to register with itself.
properties
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false

  • Run Eureka Server: Start your Eureka Server application. It will run on port 8761 and provide a dashboard accessible at http://localhost:8761.

Step 2: Create Microservices

Now, create two microservices, client-service and server-service, that register themselves with the Eureka Server.

Server Service

  • Setup: Initialize a new Spring Boot project with Spring Web and Eureka Discovery Client dependencies.

  • Enable Eureka Client: Use @EnableDiscoveryClient or @EnableEurekaClient annotation in the main application class.

java
@SpringBootApplication
@EnableDiscoveryClient
public class ServerServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServerServiceApplication.class, args);
    }
}

  • Configure and Register with Eureka: In application.properties, set the port and application name, and configure the Eureka server location.
properties
server.port=8082
spring.application.name=server-service
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/

  • Implement a Simple REST Controller: Create a controller with a simple endpoint to simulate a service.
java
@RestController
public class ServerController {

    @GetMapping("/greet")
    public String greet() {
        return "Hello from Server Service";
    }
}

Client Service

Repeat the steps above to create the client-service microservice, with a slight modification in the final step so that it discovers and calls server-service.

  • Implement a REST Controller to Use RestTemplate and DiscoveryClient:
java
@RestController
public class ClientController {

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private DiscoveryClient discoveryClient;

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @GetMapping("/call-server")
    public String callServer() {
        List<ServiceInstance> instances = discoveryClient.getInstances("server-service");
        if (instances.isEmpty()) return "No instances found";
        String serviceUri = String.format("%s/greet", instances.get(0).getUri().toString());
        return restTemplate.getForObject(serviceUri, String.class);
    }
}

Testing Service Discovery

  • Start Eureka Server: Ensure it's running and accessible.

  • Start Both Microservices: client-service and server-service should register themselves with Eureka and be visible on the Eureka dashboard.

  • Call the Client Service: Access http://localhost:<client-service-port>/call-server. This should internally call the server-service through service discovery and return "Hello from Server Service".

Replace <client-service-port> with the actual port where client-service is running, typically 8080 if you haven't specified otherwise.

This example illustrates the basic setup of Service Discovery in a microservices architecture using Spring Cloud Netflix Eureka. By dynamically discovering services, this approach significantly simplifies the communication and scalability of microservices-based applications.

3. Circuit Breaker: Preventing Failure Cascades

The Circuit Breaker pattern is a crucial fault tolerance mechanism that prevents a network or service failure from cascading through the system. When a microservice call fails repeatedly, the circuit breaker "trips," and further calls to the service are halted or redirected, allowing the failing service time to recover. This pattern ensures system stability and resilience, protecting the system from a domino effect of failures.

Example:

Implementing a Circuit Breaker pattern in a microservices architecture helps to prevent failure cascades, allowing a system to continue operating smoothly even when one or more services fail. In the Spring ecosystem, Resilience4J is a popular choice for implementing the Circuit Breaker pattern, thanks to its lightweight, modular, and flexible design. Here's how you can integrate a circuit breaker into a microservice calling another service, using Spring Boot with Resilience4J.

Step 1: Add Dependencies

For the client service that calls another service (let's continue with the client-service example), you need to add Resilience4J and Spring Boot AOP dependencies to your pom.xml.

xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-spring-boot2</artifactId>
    <version>{resilience4j.version}</version>
</dependency>

Replace {resilience4j.version} with the latest version of Resilience4J compatible with your Spring Boot version.

Step 2: Configure the Circuit Breaker

Resilience4J allows you to configure circuit breakers in application.yml or application.properties. You can define parameters such as the failure rate threshold, the wait duration in the open state, and the sliding window size.

application.yml configuration:

yaml
resilience4j.circuitbreaker:
  instances:
    callServerCircuitBreaker:
      registerHealthIndicator: true
      slidingWindowSize: 10
      minimumNumberOfCalls: 5
      permittedNumberOfCallsInHalfOpenState: 3
      automaticTransitionFromOpenToHalfOpenEnabled: true
      waitDurationInOpenState: 10s
      failureRateThreshold: 50
      eventConsumerBufferSize: 10

This configuration sets up a circuit breaker for calling the server service, with a 50% failure rate threshold and a 10-second wait duration in the open state before it transitions to half-open for testing if the failures have been resolved.

Step 3: Implement Circuit Breaker with Resilience4J

In your client-service, use the @CircuitBreaker annotation on the method that calls the server-service. This annotation tells Resilience4J to monitor this method for failures and open/close the circuit according to the defined rules.

java
@RestController
public class ClientController {

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private DiscoveryClient discoveryClient;

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @CircuitBreaker(name = "callServerCircuitBreaker", fallbackMethod = "fallback")
    @GetMapping("/call-server")
    public String callServer() {
        List<ServiceInstance> instances = discoveryClient.getInstances("server-service");
        if (instances.isEmpty()) return "No instances found";
        String serviceUri = String.format("%s/greet", instances.get(0).getUri().toString());
        return restTemplate.getForObject(serviceUri, String.class);
    }

    public String fallback(Throwable t) {
        return "Fallback Response: Server Service is currently unavailable.";
    }
}

The fallback method is invoked when the circuit breaker is open, providing an alternative response to avoid cascading failures.

Step 4: Test the Circuit Breaker

  • Start Both Microservices: Make sure both client-service and server-service are running. Ensure server-service is registered with Eureka and discoverable by client-service.

  • Simulate Failures: You can simulate failures by stopping server-service or introducing a method in server-service that always throws an exception.

  • Observe the Circuit Breaker in Action: Call the client-service endpoint repeatedly. Initially, it should successfully call server-service. After reaching the failure threshold, the circuit breaker should open, and subsequent calls should immediately return the fallback response without attempting to call server-service.

  • Recovery: After the wait duration, the circuit breaker will allow a limited number of test requests through. If these succeed, the circuit breaker will close again, and client-service will resume calling server-service normally.

This example demonstrates the basic usage of Resilience4J's Circuit Breaker in a microservices architecture, providing an effective means of preventing failure cascades and enhancing system resilience.

4. Config Server: Centralized Configuration Management

Microservices architectures often face challenges in managing configurations across services, especially when they span multiple environments. The Config Server pattern addresses this by centralizing external configurations. Services fetch their configuration from a central source at runtime, simplifying configuration management and ensuring consistency across environments.

Example:

Creating a centralized configuration management system using Spring Cloud Config Server allows microservices to fetch their configurations from a central location, simplifying the management of application settings and ensuring consistency across environments. This example will guide you through setting up a Config Server and demonstrating how a client microservice can retrieve its configuration.

Step 1: Setup Config Server

  • Initialize a Spring Boot Project: Use Spring Initializr to create a new project named config-server. Choose the necessary Spring Boot version, and add Config Server as a dependency.

  • Enable Config Server: In your main application class, use the @EnableConfigServer annotation.

java
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}

  • Configure the Config Server: Define the location of your configuration repository (e.g., a Git repository) in application.properties or application.yml. For simplicity, you can start with a local Git repository or even a file system-based repository.
properties
server.port=8888
spring.cloud.config.server.git.uri=file://${user.home}/config-repo

This example uses a local Git repository located at ${user.home}/config-repo. You'll need to create this repository and add configuration files for your client services.

  • Start the Config Server: Run your application. The Config Server will start on port 8888 and serve configurations from the specified repository.

Step 2: Prepare Configuration Repository

  • Create a Git Repository: At the location specified in your Config Server (${user.home}/config-repo), initialize a new Git repository and add configuration files for your services.

  • Add Configuration Files: Create application property files named after your client services. For example, if you have a service named client-service, create a file named client-service.properties or client-service.yml with the necessary configurations.

  • Commit Changes: Commit and push your configuration files to the repository.

Step 3: Setup Client Service to Use Config Server

  • Initialize a Spring Boot Project: Create a new project for your client service, adding dependencies for Spring Web, Config Client, and any others you require.

  • Bootstrap Configuration: In src/main/resources, create a bootstrap.properties or bootstrap.yml file (this file is loaded before application.properties), specifying the application name and Config Server location.

properties
spring.application.name=client-service
spring.cloud.config.uri=http://localhost:8888

  • Access Configuration Properties: Use @Value annotations or @ConfigurationProperties in your client service to inject configuration properties.
java
@RestController
public class ClientController {

    @Value("${example.property}")
    private String exampleProperty;

    @GetMapping("/show-config")
    public String showConfig() {
        return "Configured Property: " + exampleProperty;
    }
}

Step 4: Testing

  • Start the Config Server: Ensure it's running and accessible at http://localhost:8888.

  • Start Your Client Service: Run the client service application. It should fetch its configuration from the Config Server during startup.

  • Verify Configuration Retrieval: Access the client service's endpoint (e.g., http://localhost:<client-port>/show-config). It should display the value of example.property fetched from the Config Server.

This example demonstrates setting up a basic Spring Cloud Config Server and a client service retrieving configuration properties from it. This setup enables centralized configuration management, making it easier to maintain and update configurations across multiple services and environments.

5. Bulkhead: Isolating Failures

Inspired by the watertight compartments (bulkheads) in a ship, the Bulkhead pattern isolates elements of an application into pools. If one service or resource pool fails, the others remain unaffected, ensuring the overall system remains operational. This pattern enhances system resilience by preventing a single failure from bringing down the entire application.
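Stripped of any framework, the core idea can be sketched in a few lines of plain Java. The SimpleBulkhead class below is a hypothetical illustration; in a Spring Boot service you would typically use Resilience4J's Bulkhead module instead:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Hypothetical sketch of a bulkhead: each downstream dependency gets its own
// fixed pool of permits, so one slow or failing dependency cannot consume
// every thread in the application.
public class SimpleBulkhead {

    private final Semaphore permits;

    public SimpleBulkhead(int maxConcurrentCalls) {
        this.permits = new Semaphore(maxConcurrentCalls);
    }

    // Runs the call only if a permit is free; otherwise fails fast with the fallback.
    public <T> T execute(Supplier<T> call, T fallback) {
        if (!permits.tryAcquire()) {
            return fallback; // pool exhausted: reject instead of queueing up
        }
        try {
            return call.get();
        } finally {
            permits.release();
        }
    }
}
```

Because the pool is sized per dependency, exhausting the permits for one service leaves calls to every other service unaffected, which is exactly the ship-compartment analogy.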

6. Sidecar: Enhancing Services with Auxiliary Functionality

The Sidecar pattern involves deploying an additional service (the sidecar) alongside each microservice. This sidecar handles orthogonal concerns such as monitoring, logging, security, and network traffic control, allowing the main service to focus on its core functionality. This pattern promotes operational efficiency and simplifies the development of microservices by abstracting common functionalities into a separate entity.
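In Kubernetes terms, a sidecar is simply a second container in the same pod, sharing volumes and network with the main service. A minimal sketch, with illustrative image and container names, might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: order-service
spec:
  containers:
    - name: order-service          # the main Spring Boot microservice
      image: example/order-service:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper            # sidecar: ships logs; the app stays unaware of it
      image: example/log-shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
```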

7. Backends for Frontends: Tailored APIs for Diverse Clients

Different frontend applications (web, mobile, etc.) often require different backends to efficiently meet their specific requirements. The Backends for Frontends (BFF) pattern addresses this by providing dedicated backend services for each type of frontend. This approach optimizes the backend to frontend communication, enhancing performance and user experience.
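Reusing the Spring Cloud Gateway syntax from the earlier example, a BFF setup could be sketched as two dedicated backends, one per client type (the service names and ports here are hypothetical):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: web-bff              # rich, page-oriented API for the web client
          uri: http://web-bff:8080
          predicates:
            - Path=/web/**
        - id: mobile-bff           # lean payloads tailored to mobile screens
          uri: http://mobile-bff:8080
          predicates:
            - Path=/mobile/**
```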

8. Saga: Managing Transactions Across Microservices

In distributed systems, maintaining data consistency across microservices without relying on traditional two-phase commit transactions is challenging. The Saga pattern offers a solution by breaking down transactions into a series of local transactions. Each service performs its local transaction and publishes an event; subsequent services listen to these events and perform their transactions accordingly, ensuring overall data consistency.
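The orchestration variant of this pattern can be sketched without any framework. The SagaOrchestrator below is a hypothetical illustration: each step is a local transaction paired with a compensating action, and a failure rolls back completed steps in reverse order:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical saga sketch: instead of a distributed two-phase commit, each
// step commits locally; on failure, already-completed steps are compensated
// in reverse order.
public class SagaOrchestrator {

    public interface Step {
        boolean execute();   // the local transaction; true on success
        void compensate();   // undoes the local transaction
    }

    // Returns true if all steps committed, false if the saga was rolled back.
    public static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            if (step.execute()) {
                completed.push(step);
            } else {
                while (!completed.isEmpty()) {
                    completed.pop().compensate(); // undo in reverse order
                }
                return false;
            }
        }
        return true;
    }
}
```

In a real system the steps would be remote calls to other microservices and the saga state would be persisted, but the commit-or-compensate control flow is the same.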

9. Event Sourcing: Immutable Event Logs

The Event Sourcing pattern captures changes to an application's state as a sequence of events. This approach not only facilitates auditing and debugging by providing a historical record of all state changes but also simplifies communication between microservices. By publishing state changes as events, services can react to these changes asynchronously, enhancing decoupling and scalability.
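A toy illustration of the idea in plain Java (the EventSourcedAccount class is hypothetical; real implementations persist the event log and usually add snapshots):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event-sourcing sketch: the balance is never stored directly;
// it is derived by replaying the immutable event log from the beginning.
public class EventSourcedAccount {

    public record Event(String type, long amount) {}

    private final List<Event> events = new ArrayList<>();

    public void deposit(long amount)  { events.add(new Event("DEPOSITED", amount)); }
    public void withdraw(long amount) { events.add(new Event("WITHDRAWN", amount)); }

    // Current state = a left fold over the event history.
    public long balance() {
        long balance = 0;
        for (Event e : events) {
            balance += e.type().equals("DEPOSITED") ? e.amount() : -e.amount();
        }
        return balance;
    }

    public List<Event> history() {
        return List.copyOf(events); // the log itself is the source of truth
    }
}
```

Because every state change is an appended event, other microservices can subscribe to the same stream and react asynchronously, which is what makes this pattern a natural fit for decoupled architectures.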

10. CQRS: Separation of Concerns for Performance and Scalability

Command Query Responsibility Segregation (CQRS) pattern separates the read (query) and write (command) operations of an application into distinct models. This separation allows optimization of each operation, potentially improving performance, scalability, and security. CQRS is particularly beneficial in systems where the complexity and performance requirements for read and write operations differ significantly.
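A minimal sketch of the separation in plain Java (the ProductCatalog class is hypothetical; in larger systems the read model often lives in a different store and is updated asynchronously via events):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical CQRS sketch: commands go through a write model that validates
// and owns the authoritative data, while queries are served from a separately
// maintained, read-optimized view.
public class ProductCatalog {

    // Write model: authoritative data, touched only by command handlers.
    private final Map<String, Long> priceCentsById = new HashMap<>();
    // Read model: denormalized view, touched only by queries.
    private final Map<String, String> displayById = new HashMap<>();

    public void handleSetPrice(String id, long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative price");
        priceCentsById.put(id, cents);
        // Project the change into the read model (often done asynchronously in practice).
        displayById.put(id, id + ": $" + (cents / 100.0));
    }

    // The query side never reads the write model.
    public String queryDisplay(String id) {
        return displayById.get(id);
    }
}
```

Keeping the two models apart lets each be optimized and scaled independently, at the cost of the read model being eventually consistent with the write model.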

Conclusion

The distributed system patterns discussed in this blog form the backbone of effective microservices architectures. By leveraging these patterns, developers can build systems that are not only scalable and flexible but also resilient and manageable. However, it's crucial to understand that each pattern comes with its trade-offs and should be applied based on the specific requirements and context of the application. As the world of software continues to evolve, so too will the patterns and practices that underpin the successful implementation of microservices, guiding developers through the complexities of distributed systems architecture.

· 3 min read
Byju Luckose

The cloud-native landscape is rapidly evolving, driven by a commitment to innovation, security, and the open-source ethos. Recent events such as KubeCon and SUSECON 2023 have showcased significant advancements and trends that are shaping the future of cloud-native technologies. Here, we delve into the highlights and insights from these conferences, providing a glimpse into the future of cloud-native computing.

Open Standards in Observability Take Center Stage

Observability has emerged as a critical aspect of cloud-native architectures, enabling organizations to monitor, debug, and optimize their applications and systems efficiently. KubeCon highlighted the rise of open standards in observability, demonstrating a collective industry effort towards compatibility, collaboration, and convergence. Notable developments include:

  • The formation of a new CNCF working group, led by eBay and Netflix, focusing on standardizing query languages for observability.
  • Efforts to standardize the Prometheus Remote-Write Protocol, enhancing interoperability across metrics and time-series data.
  • The transition from OpenCensus to OpenTelemetry, marking a significant step forward in unified observability frameworks under the CNCF.

These initiatives underscore the industry's move towards open specifications and standards, ensuring that the tools and platforms within the cloud-native ecosystem can work together seamlessly.

The Evolution of Cloud-Native Architectures

Cloud-native computing represents a transformative approach to software development, characterized by the use of containers, microservices, immutable infrastructure, and declarative APIs. This paradigm shift focuses on maximizing development flexibility and agility, enabling teams to create applications without the traditional constraints of server dependencies.

The transition to cloud-native technologies has been driven by the need for more agile, scalable, and reliable software solutions, particularly in dynamic cloud environments. As a result, organizations are increasingly adopting cloud-native architectures to benefit from increased development speed, enhanced scalability, improved reliability, and cost efficiency.

SUSECON 2023: Reimagining Cloud-Native Security and Innovation

SUSECON 2023 shed light on how SUSE is addressing organizational challenges in the cloud-native world. The conference showcased SUSE's efforts to innovate and expand its footprint in the cloud-native ecosystem, emphasizing flexibility, agility, and the importance of open-source solutions.

Highlights from SUSECON 2023 include:

  • Advancements in SUSE Linux Enterprise (SLE) and security-focused updates to Rancher, offering customers highly configurable solutions without vendor lock-in.
  • The introduction of cloud-native AI-based observability tools, providing smarter insights and full visibility across workloads.
  • Emphasis on modernization, with cloud-native infrastructure solutions that allow organizations to design modern approaches and manage virtual machines and containers across various deployments.

SUSE's focus on cloud-native technologies promises to provide organizations with the tools they need to thrive in a rapidly changing digital landscape, addressing the IT skill gap challenges and simplifying the path towards modernization.

Looking Ahead: The Future of Cloud-Native Technologies

The insights from KubeCon and SUSECON 2023 highlight the continuous evolution and growing importance of cloud-native technologies. As the industry moves towards open standards and embraces innovative solutions, organizations are well-positioned to navigate the complexities of modern software development and deployment.

The future of cloud-native computing is bright, with ongoing efforts to enhance observability, improve security, and foster an open-source community driving the technology forward. As we look ahead, it's clear that the principles of flexibility, scalability, and resilience will continue to guide the development of cloud-native architectures, ensuring they remain at the forefront of digital transformation.

The cloud-native journey is one of constant learning and adaptation. By staying informed about the latest trends and advancements, organizations can leverage these powerful technologies to achieve their strategic goals and thrive in the digital era.

· 3 min read
Byju Luckose

In modern applications, permanently deleting records is often undesirable. Instead, developers prefer an approach that allows records to be marked as deleted without actually removing them from the database. This approach is known as "Soft Delete." In this blog post, we'll explore how to implement Soft Delete in a Spring Boot application using JPA for data persistence.

What is Soft Delete?

Soft Delete is a pattern where records in the database are not physically deleted but are instead marked as deleted. This is typically achieved by a deletedAt field in the database table. If this field is null, the record is considered active. If it's set to a timestamp, however, the record is considered deleted.

Benefits of Soft Delete

  • Data Recovery: Deleted records can be easily restored.
  • Preserve Integrity: Relationships with other tables remain intact, protecting data integrity.
  • Audit Trail: The deletedAt field provides a built-in audit trail for the deletion of records.

Implementation in Spring Boot with JPA

Step 1: Creating the Base Entity

Let's start by creating a base entity that includes common attributes like createdAt, updatedAt, and deletedAt. This class will be inherited by all entities that should support Soft Delete.

java
import javax.persistence.MappedSuperclass;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;
import java.time.LocalDateTime;

@MappedSuperclass
public abstract class Auditable {

    private LocalDateTime createdAt;
    private LocalDateTime updatedAt;
    private LocalDateTime deletedAt;

    @PrePersist
    public void prePersist() {
        createdAt = LocalDateTime.now();
    }

    @PreUpdate
    public void preUpdate() {
        updatedAt = LocalDateTime.now();
    }

    // Getters and Setters...
}

Step 2: Define an Entity with Soft Delete

Now, let's define an entity that inherits from Auditable to leverage the Soft Delete behavior.

java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class BlogPost extends Auditable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // Getters and Setters...
}

Step 3: Customize the Repository

The repository needs to be customized to query only non-deleted records and allow for Soft Delete.

java
import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.transaction.annotation.Transactional;

public interface BlogPostRepository extends JpaRepository<BlogPost, Long> {

    @Query("select b from BlogPost b where b.deletedAt is null")
    List<BlogPost> findAllActive();

    @Transactional
    @Modifying
    @Query("update BlogPost b set b.deletedAt = CURRENT_TIMESTAMP where b.id = :id")
    void softDelete(@Param("id") Long id);
}

Step 4: Using Soft Delete in the Service

In your service, you can now use the softDelete method to softly delete records instead of completely removing them.

java
@Service
public class BlogPostService {

    private final BlogPostRepository repository;

    public BlogPostService(BlogPostRepository repository) {
        this.repository = repository;
    }

    public void deleteBlogPost(Long id) {
        repository.softDelete(id);
    }

    // Other methods...
}

Conclusion

Soft Delete in JPA and Spring Boot offers a flexible and reliable method to preserve data integrity, enhance the audit trail, and facilitate data recovery. By using a base entity class and customizing the repository, you can easily integrate Soft Delete into your application.

· 4 min read
Byju Luckose

In the dynamic world of microservices architecture, Spring Cloud emerges as a powerhouse framework that simplifies the development and deployment of cloud-native, distributed systems. It offers a suite of tools to address common patterns in distributed systems, such as configuration management, service discovery, circuit breakers, and routing. This blog post dives into the core components of Spring Cloud, showcasing how it facilitates building resilient, scalable microservice applications.

Introduction to Spring Cloud

Spring Cloud is built on top of Spring Boot, providing developers with a coherent and flexible toolkit for building common patterns in distributed systems. It leverages and simplifies the use of technologies such as Netflix OSS, Consul, and Kubernetes, allowing developers to focus on their business logic rather than the complexity of cloud-based deployment and operation.

Key Features of Spring Cloud

  • Service Discovery: Tools like Netflix Eureka or Consul for automatic detection of network locations.
  • Configuration Management: Centralized configuration using Spring Cloud Config Server for managing application settings across all environments.
  • Routing and Filtering: Intelligent routing with Zuul or Spring Cloud Gateway, enabling dynamic route mapping and filtering.
  • Circuit Breakers: Resilience patterns with Hystrix, Resilience4j, or Spring Retry for handling service outages gracefully.
  • Distributed Tracing: Spring Cloud Sleuth and Zipkin for tracing requests across microservices, essential for debugging and monitoring.

Building Blocks of Spring Cloud

Let's delve into some of the critical components of Spring Cloud, illustrating how they bolster the development of microservice architectures.

Service Discovery: Eureka

Service discovery is crucial in microservices architectures, where services need to locate and communicate with each other. Eureka, Netflix's service discovery tool, is seamlessly integrated into Spring Cloud. Services register with Eureka Server upon startup and then discover each other through it, abstracting away the complexity of DNS configurations and IP addresses.
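As a sketch, registering a service with a Eureka server usually comes down to a couple of properties in application.yml (the server address and service name below are illustrative):

```yaml
# application.yml (client side) — hypothetical Eureka server address
spring:
  application:
    name: orders-service          # the name other services use to discover this app
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/
```

With the Eureka client starter on the classpath, the app registers itself on startup and other services can resolve it by name instead of by IP address.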

Configuration Management: Spring Cloud Config

Spring Cloud Config provides support for externalized configuration in a distributed system. With the Config Server, you have a central place to manage external properties for applications across all environments. The server stores configuration files in a Git repository, simplifying version control and changes. Clients fetch their configuration from the server on startup, ensuring consistency and ease of management.
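A minimal client-side sketch, assuming a Config Server running locally on the default port (the URL and application name are illustrative):

```yaml
# application.yml of a config client — hypothetical server URL
spring:
  application:
    name: orders-service          # maps to orders-service.yml in the config repo
  config:
    import: "configserver:http://localhost:8888"
```

On startup the client fetches `orders-service.yml` (plus profile-specific variants) from the server before the rest of the context initializes.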

Circuit Breaker: Hystrix

In a distributed environment, services can fail. Hystrix, Netflix's latency and fault tolerance library, helps control the interaction between services: it wraps remote calls, provides fallback methods, and opens a circuit breaker when a downstream service keeps failing, preventing cascading failures across services. Note that Hystrix is now in maintenance mode; Resilience4j (via Spring Cloud Circuit Breaker) is the recommended choice in current Spring Cloud releases.
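The core idea behind Hystrix and Resilience4j can be sketched without any framework: count consecutive failures, and once a threshold is crossed, fail fast with a fallback instead of hitting the struggling service. This is a deliberately minimal illustration (names and the threshold are made up; real breakers add timeouts and a half-open retry state):

```java
import java.util.function.Supplier;

// Minimal circuit breaker: closed until `threshold` consecutive failures,
// then open, meaning calls return the fallback without running at all.
class SimpleCircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    boolean isOpen() {
        return consecutiveFailures >= threshold;
    }

    String call(Supplier<String> remoteCall, String fallback) {
        if (isOpen()) {
            return fallback; // fail fast: do not hit the failing service
        }
        try {
            String result = remoteCall.get();
            consecutiveFailures = 0; // a success resets the counter
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}

public class CircuitBreakerDemo {
    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(3);

        // Three failing calls trip the breaker...
        for (int i = 0; i < 3; i++) {
            breaker.call(() -> { throw new RuntimeException("timeout"); }, "cached");
        }
        System.out.println(breaker.isOpen()); // prints true

        // ...after which callers get the fallback immediately.
        System.out.println(breaker.call(() -> "live data", "cached")); // prints cached
    }
}
```

Production libraries add a half-open state that periodically lets one probe request through, so the breaker can close again once the downstream service recovers.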

Intelligent Routing: Zuul and Spring Cloud Gateway

Zuul and Spring Cloud Gateway offer dynamic routing, monitoring, resiliency, and security. They act as an edge service that routes requests to the appropriate backend services and handle cross-cutting concerns such as authentication, rate limiting, and metrics in one place. For new projects, Spring Cloud Gateway is the recommended option, as Netflix Zuul 1 support was removed from recent Spring Cloud release trains.
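As a sketch, a Spring Cloud Gateway route is declared as a predicate plus an optional filter chain; the route below is hypothetical and assumes service discovery is in place for the `lb://` URI to resolve:

```yaml
# application.yml — illustrative route: forward /orders/** to the orders service
spring:
  cloud:
    gateway:
      routes:
        - id: orders-route
          uri: lb://orders-service      # 'lb://' resolves via service discovery
          predicates:
            - Path=/orders/**
          filters:
            - StripPrefix=1             # /orders/42 becomes /42 downstream
```

Requests matching the `Path` predicate are rewritten by the filters and proxied to the target service, keeping routing logic out of the individual microservices.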

Distributed Tracing: Sleuth and Zipkin

Spring Cloud Sleuth integrates with logging frameworks to add IDs to your logging, which are then used to trace requests across microservices. Zipkin is a distributed tracing system that collects and visualizes these traces, making it easier to understand the path requests take through your system and identify bottlenecks.
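For a Sleuth-era setup (Spring Boot 2.x), wiring tracing to a local Zipkin instance is mostly two properties; the values below are illustrative:

```yaml
# application.yml — send Sleuth traces to a local Zipkin server
spring:
  sleuth:
    sampler:
      probability: 1.0        # trace every request (lower this in production)
  zipkin:
    base-url: http://localhost:9411
```

With these set, trace and span IDs appear in the application logs and completed spans are reported to Zipkin for visualization. (In Spring Boot 3, Sleuth has been superseded by Micrometer Tracing, which uses different property names.)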

Embracing Cloud-Native with Spring Cloud

Spring Cloud provides a rich set of tools that are essential for developing cloud-native applications. By addressing common cloud-specific challenges, Spring Cloud allows developers to focus on creating business value, rather than the underlying infrastructure. Its integration with Spring Boot means developers can use familiar annotations and programming models, significantly lowering the learning curve.

Getting Started with Spring Cloud

To start using Spring Cloud, you can include the Spring Cloud Starter dependencies in your pom.xml or build.gradle file. Spring Initializr (https://start.spring.io/) also offers an easy way to bootstrap a new Spring Cloud project.
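In Maven, the usual first step is importing the Spring Cloud BOM so all starters share compatible versions; the release train version below is an example and should be aligned with your Spring Boot version:

```xml
<!-- pom.xml — example release train version; check the compatibility matrix -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>2023.0.1</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Individual starters (for example `spring-cloud-starter-netflix-eureka-client` or `spring-cloud-starter-gateway`) can then be added without explicit version numbers.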

Conclusion

Spring Cloud stands out as an essential framework for anyone building microservices in a cloud environment. By offering solutions to common distributed system challenges, Spring Cloud enables developers to build resilient, scalable, and maintainable microservice architectures with ease. Whether you're handling configuration management, service discovery, or routing, Spring Cloud provides a cohesive, streamlined approach to developing complex cloud-native applications.