AI Driven Threat Detection with Microservices Architecture

June 14, 2024

Introduction

AI-driven threat detection leverages machine learning models to analyze data and identify irregular patterns or anomalies that may indicate a cyber attack. This approach is further empowered by adopting a microservices architecture, which strategically separates the AI functionality from the main application into modular, independent services. By doing so, organizations can achieve enhanced scalability and maintainability. Each microservice can be developed, deployed, and scaled independently, optimizing resource allocation based on real-time workload demands.

Decoupling AI functionality through microservices not only facilitates seamless integration of new capabilities but also improves overall system flexibility. Updates and enhancements can be implemented efficiently, ensuring the continuous evolution of defenses against emerging cyber threats. This combination of AI-driven analysis and microservices architecture provides a robust foundation for organizations to proactively safeguard their digital assets in today’s dynamic threat landscape.

Key Concepts

  • AI Driven Threat Detection:
    • Machine Learning (ML): Utilizes algorithms to detect patterns and anomalies in network traffic, user behavior, and system logs.
  • Microservices Architecture:
    • Service Independence: Each microservice operates independently, allowing for continuous deployment and scaling.
    • API-Driven Communication: Microservices communicate through APIs, facilitating integration and interoperability.
    • Scalability: Services can be scaled independently based on demand, ensuring efficient resource utilization.
    • Resilience: Failure in one service does not affect the entire system, enhancing overall system reliability.

In this blog, we will create two microservices: a Python-based AI microservice that performs anomaly detection using a pre-trained Isolation Forest model [3], and a Spring Boot-based Client AI Microservice that communicates with the AI microservice to get predictions. The two microservices communicate with each other over an HTTP API.

A simple text sketch of the architecture is shown below:
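
curl client --(POST /api/detect)--> Client AI Microservice (Spring Boot, port 8443)
                                        |
                                        v  (POST /predict)
                              AI Microservice (Flask + Isolation Forest, port 5001)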

Python AI Microservice (Flask) – The first step involves setting up a Python Flask microservice designed to serve the AI model and to interact with the Client AI microservice. This microservice acts as the backend infrastructure, handling requests from and responses to the client application.

For testing purposes, we use randomly generated normal and outlier datasets to train a sample Isolation Forest model. This model serves as the key input for testing and evaluating our threat detection mechanism.

The sample code is provided below:

# -*- coding: utf-8 -*-
import os

from flask import Flask, request, jsonify
import numpy as np
import joblib
from sklearn.ensemble import IsolationForest

MODEL_PATH = 'model_file/isolation_forest_model.pkl'

app = Flask(__name__)

# The trained model is kept in a module-level variable so the /predict route can use it
model = None

def training_model():
    try:
        # Generate some synthetic normal data
        rng = np.random.RandomState(42)
        x_normal = 0.3 * rng.randn(100, 2)

        # Generate some synthetic outlier data
        x_outliers = rng.uniform(low=-4, high=4, size=(20, 2))

        # Combine normal data (two clusters) and outlier data
        x = np.r_[x_normal + 2, x_normal - 2, x_outliers]

        # Train the Isolation Forest model
        isolation_forest_model = IsolationForest(contamination=0.2, random_state=42)
        isolation_forest_model.fit(x)

        # Save the model to a file, creating the target directory if it does not exist
        os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)
        joblib.dump(isolation_forest_model, MODEL_PATH)

        print(f"Model trained and saved successfully at '{MODEL_PATH}'")
    except Exception as e:
        print(f"Error generating model: {e}")

# Load model function
def load_model():
    if os.path.exists(MODEL_PATH):
        try:
            loaded_model = joblib.load(MODEL_PATH)
            print("Model loaded successfully!")
            return loaded_model
        except Exception as e:
            print(f"Error loading model: {e}")
            return None
    else:
        print(f"Model file '{MODEL_PATH}' not found.")
        return None

@app.route('/predict', methods=['POST'])
def predict():
    if model is None:
        return jsonify({'error': 'Model not loaded. Please upload the model file.'}), 500

    data = request.get_json()
    if 'features' not in data:
        return jsonify({'error': 'No features found in request data.'}), 400

    x_new = np.array(data['features'])
    try:
        predictions = model.predict(x_new)
        return jsonify({'predictions': predictions.tolist()})
    except Exception as e:
        return jsonify({'error': str(e)}), 500

if __name__ == '__main__':
    try:
        # Generate a sample Isolation Forest Model
        training_model()

        # Load the model on startup
        model = load_model()

        # Bind to 0.0.0.0 so the service is reachable from outside the Docker container
        app.run(debug=True, host='0.0.0.0', port=5001)
    except Exception as e:
        print(str(e))

Dockerfile – We will use Docker to build and run the AI Microservice. A sample Dockerfile is provided below:

FROM python:3.9-alpine

WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "ai_microservice.py"]

A sample requirements.txt file is provided below:

Flask~=3.0.3
joblib~=1.4.2
numpy~=2.0.0
scikit-learn~=1.5.0

Spring Boot Client Service – The next step involves developing a Spring Boot AI Client Microservice that interacts with the AI microservice. The service exchanges data with the AI microservice over a REST API.

Maven dependencies – The Maven dependencies for the client microservice are provided in the pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <parent>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-parent</artifactId>
       <version>3.3.0</version>
       <relativePath/> <!-- lookup parent from repository -->
   </parent>
   <groupId>com.nextgenerationconsultancy</groupId>
   <artifactId>client-ai-microservice</artifactId>
   <version>0.0.1-SNAPSHOT</version>
   <name>client-ai-microservice</name>
   <description>client-ai-microservice</description>
   <properties>
       <java.version>17</java.version>
   </properties>
   <dependencies>
       <dependency>
           <groupId>org.springframework.boot</groupId>
           <artifactId>spring-boot-starter-web</artifactId>
       </dependency>

       <!-- https://mvnrepository.com/artifact/io.jsonwebtoken/jjwt-api -->
       <dependency>
           <groupId>io.jsonwebtoken</groupId>
           <artifactId>jjwt-api</artifactId>
           <version>0.12.5</version>
       </dependency>

       <dependency>
           <groupId>org.springframework.boot</groupId>
           <artifactId>spring-boot-starter-test</artifactId>
           <scope>test</scope>
       </dependency>
   </dependencies>

   <build>
       <plugins>
           <plugin>
               <groupId>org.springframework.boot</groupId>
               <artifactId>spring-boot-maven-plugin</artifactId>
           </plugin>
       </plugins>
   </build>

</project>

Application Class – ClientAiMicroserviceApplication is the class that contains the main() method which uses Spring Boot’s SpringApplication.run() method to launch our application.

package com.nextgenerationconsultancy.clientaimicroservice.main;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Component scanning starts from this class's package, so we point it at the parent
// package to also pick up beans in sibling packages such as controllers
@SpringBootApplication(scanBasePackages = "com.nextgenerationconsultancy.clientaimicroservice")
public class ClientAiMicroserviceApplication {

   public static void main(String[] args) {
       SpringApplication.run(ClientAiMicroserviceApplication.class, args);
   }
}

@SpringBootApplication is a convenience annotation that combines @Configuration, @EnableAutoConfiguration, and @ComponentScan on our main class. Because component scanning starts from the package of the annotated class, we pass scanBasePackages so that sibling packages, such as the controllers package, are also scanned.

Threat Detection Rest Controller – ThreatDetectionController is the Spring Boot client microservice's REST controller that forwards requests to the Flask AI microservice:

package com.nextgenerationconsultancy.clientaimicroservice.controllers;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

import java.util.Map;

@RestController
@RequestMapping("/api")
public class ThreatDetectionController {

   private final RestTemplate restTemplate;

   @Value("${ai.server.url}")
   private String aiServerUrl;

   public ThreatDetectionController(RestTemplate restTemplate) {
       this.restTemplate = restTemplate;
   }

   @PostMapping(value = "/detect", consumes = MediaType.APPLICATION_JSON_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
   public ResponseEntity<String> detectThreats(@RequestBody Map<String, Object> features) {
       try {
           return ResponseEntity.ok(restTemplate.postForEntity(aiServerUrl + "/predict", features, String.class).getBody());
       } catch (RestClientException e) {
           return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(e.getMessage());
       }
   }
}
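
Note that Spring Boot does not auto-configure a plain RestTemplate bean, so the constructor injection in the controller above needs one to be defined explicitly. A minimal sketch of such a configuration class is shown below (the class and package names are assumptions):

package com.nextgenerationconsultancy.clientaimicroservice.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // Expose a RestTemplate bean so it can be injected into ThreatDetectionController
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}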

Dockerfile – A sample Dockerfile to run our AI Client Microservice (Spring Boot) is given below:

FROM openjdk:17-oracle

VOLUME /tmp

COPY target/client-ai-microservice-0.0.1-SNAPSHOT.jar app.jar

ENTRYPOINT ["java", "-jar", "/app.jar"]

Application Properties – Set the required application parameters inside the application.properties file. In our case, we included the application name, the server port, and the base URL of the AI microservice (the controller appends the /predict path):

spring.application.name=client-ai-microservice
server.port=8443
ai.server.url=http://localhost:5001

Deploying the Services and Retrieving Results

  • Build and run the AI Microservice (Flask):
cd ai_microservice
docker build -t ai-microservice .
docker run -p 5001:5001 ai-microservice

The AI Microservice application will start on port 5001 on our local machine.
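
At this point, the Flask service can optionally be verified on its own before wiring up the client, for example with a quick request from the host machine:

curl -H "Content-Type: application/json" -d '{"features": [[2.1, 2.2]]}' -X POST http://localhost:5001/predict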

  • Build and run the Client Microservice (Spring Boot):
cd client-ai-microservice
mvn clean
mvn compile
mvn package
docker build -t client-ai-microservice .
docker run -p 8443:8443 client-ai-microservice

The Client AI Microservice API will start on port 8443 on the local machine.
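
One caveat: ai.server.url=http://localhost:5001 resolves correctly when the Spring Boot client runs directly on the host, but inside a container, localhost refers to the client container itself. A minimal sketch of one way around this (the network and container names are assumptions) is to place both containers on a user-defined Docker network and override the URL through an environment variable, which Spring Boot's relaxed binding maps to ai.server.url:

docker network create threat-detection-net
docker run --network threat-detection-net --name ai-microservice -p 5001:5001 ai-microservice
docker run --network threat-detection-net -p 8443:8443 \
  -e AI_SERVER_URL=http://ai-microservice:5001 client-ai-microservice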

To make a prediction, send a POST request to the Spring Boot client microservice, which will forward the request to the Flask AI microservice:

curl -H "Content-Type: application/json" -d '{"features": [[2.1, 2.2], [1.9, 1.8]]}' -X POST http://localhost:8443/api/detect

The AI Microservice will process the request, perform the prediction, and return the result via the Client AI Microservice.

If we use the above curl request to send data to the AI Client Microservice, we receive the following result:

{
   "predictions": [1, 1]
}

This result indicates that our data points fall within the regions the model learned as normal, where `1` signifies normal data and `-1` denotes an anomaly.

If the data points deviate from what the model learned as normal during training, the predictions flag them as anomalies or unexpected patterns, as shown below:

Request:

curl -H "Content-Type: application/json" -d '{"features": [[0.1, 0.2], [-0.5, 1.3]]}' -X POST http://localhost:8443/api/detect

Response:

{
   "predictions": [-1, -1]
}
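
On the client side, these raw values can be mapped to something more readable. A minimal sketch of interpreting the predictions array is shown below (the class and method names are assumptions, not part of the services above):

import java.util.List;

public class PredictionInterpreter {

    // Isolation Forest returns 1 for normal (inlier) points and -1 for anomalies (outliers)
    public static boolean containsAnomaly(List<Integer> predictions) {
        return predictions.stream().anyMatch(p -> p == -1);
    }

    public static void main(String[] args) {
        System.out.println(containsAnomaly(List.of(1, 1)));   // false -> all points look normal
        System.out.println(containsAnomaly(List.of(-1, -1))); // true  -> anomalies detected
    }
}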

Conclusion

Integrating AI-driven threat detection within a microservices architecture offers a forward-thinking, scalable, and resilient approach to cybersecurity. This strategy allows organizations to swiftly respond to emerging threats, leverage advanced AI capabilities, and ensure continuous protection of critical assets. The modular nature of microservices enhances flexibility and fault tolerance, enabling seamless scalability and rapid updates to security protocols.

By adopting this innovative architectural paradigm, organizations can significantly strengthen their cybersecurity posture. This approach not only fortifies defenses against increasingly sophisticated cyber threats but also ensures a robust and adaptive security system that can evolve with the digital landscape.

References

  1. Flask Documentation: Flask is a lightweight WSGI web application framework in Python. For more details, visit [Flask Documentation].
  2. Spring Boot Documentation: Spring Boot makes it easy to create stand-alone, production-grade Spring-based Applications. For more details, visit [Spring Boot Documentation].
  3. Isolation Forest: An unsupervised learning algorithm for anomaly detection. More information can be found in the paper by Liu, Fei Tony, Ting, Kai Ming, and Zhou, Zhi-Hua: [Isolation Forest].