Architecture Decision Record


21. Publish Service

Date: 2025-01-03

Status

Accepted

Context

To publish Markdown files to the Hugo site's post section, we want to implement a simple Go microservice.

Decision

As we rely on a microservice architecture, we will use a web toolkit to ease the Go service implementation.

Consequences

We will integrate Gorilla, and more specifically Mux, to implement a request router and dispatcher. We will use the GitHub API to push ADRs to the Hugo repository.

```mermaid
---
config:
  look: handDrawn
  theme: neutral
---
flowchart LR
    PR[Python Service] -- Publish --> PU[Go Service] -- Push --> GH[Github Repository]
```
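The push to the Hugo repository can go through GitHub's "create or update file contents" endpoint (PUT /repos/{owner}/{repo}/contents/{path}), which expects the file content base64-encoded in a JSON body. A minimal sketch of building that payload, written in Python for brevity (the service itself is Go); the commit message, branch, and target path are illustrative:

```python
import base64
import json

def build_commit_payload(markdown_text: str, message: str, branch: str = "main") -> str:
    """Build the JSON body for GitHub's create-or-update-file-contents API.

    The endpoint requires the file content to be base64-encoded.
    """
    return json.dumps({
        "message": message,
        "branch": branch,
        "content": base64.b64encode(markdown_text.encode("utf-8")).decode("ascii"),
    })

payload = build_commit_payload("# ADR 21\n", "publish: add ADR 21")
# The service would send this payload in an authenticated PUT request, e.g. to
# https://api.github.com/repos/<owner>/<repo>/contents/content/posts/adr-21.md
```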

22. StateStore

Date: 2025-01-03

Status

Accepted

Context

We want to implement a StateStore component to process Markdown files. In distributed systems, particularly in event-driven architectures, a StateStore is a database or storage solution used to manage the current state of an application. It’s crucial for systems that require consistency, such as event sourcing or CQRS.

Decision

As we rely on Dapr, we will use the StateStore component with Azure CosmosDB to implement file state storage between services, as described in the Dapr documentation.
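Wiring this up amounts to declaring a Dapr state store component of type `state.azure.cosmosdb`. A sketch of the component manifest, following the Dapr documentation; the component name and all metadata values below are placeholders, and in practice the master key would come from a secret store rather than plain text:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # name the services use when calling the state API
spec:
  type: state.azure.cosmosdb
  version: v1
  metadata:
  - name: url
    value: <CosmosDB account URL>
  - name: masterKey
    value: <CosmosDB master key>   # prefer a Dapr secret store reference
  - name: database
    value: <database name>
  - name: collection
    value: <container name>
```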


23. LLM

Date: 2025-01-03

Status

Accepted

Context

We want to integrate an LLM solution to generate a synthesis of MADR. LLM stands for Large Language Model: a type of artificial intelligence model designed to process and generate human language at a large scale. These models are built using deep learning techniques, specifically neural networks, and are trained on vast amounts of text data. They are capable of understanding and generating natural language, making them useful for tasks like summarization, translation, and question answering.


24. Notification

Date: 2025-01-03

Status

Accepted

Context

We want to integrate a solution to notify the user at the end of the process.

Decision

We will use Pusher and Ionic Toast for notifications. Pusher is a service that allows integrating real-time functionality into web and mobile applications. It supports real-time notifications, chat, and live updates using WebSockets.

Consequences

We will integrate the Pusher Python SDK in the Python service to send a notification at the end of file processing.
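A minimal sketch of the end-of-processing notification in the Python service. The channel name, event name, and payload shape are illustrative assumptions, not taken from the ADR; a recording stub stands in for the real Pusher client so the sketch runs without credentials:

```python
class _RecordingClient:
    """Stand-in for pusher.Pusher in this sketch; records events instead of sending."""

    def __init__(self):
        self.sent = []

    def trigger(self, channel, event, data):
        self.sent.append((channel, event, data))

def notify_process_complete(client, filename: str, status: str) -> dict:
    """Send the end-of-processing event through a Pusher-compatible client."""
    payload = {"file": filename, "status": status}
    client.trigger("notifications", "file-processed", payload)
    return payload

client = _RecordingClient()
notify_process_complete(client, "adr-21.md", "published")
# In the real service, `client` would be pusher.Pusher(app_id=..., key=...,
# secret=..., cluster=...); the Ionic front end shows the event as a Toast.
```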


25. LangChain Integration

Date: 2025-02-22

Status

Accepted

Context

We want to refactor the LLM microservice code to be more maintainable and scalable, and to follow best practices for working with LLMs. The use of prompt templates, in particular, is a significant improvement for future development.

Decision

We will refactor the code based on LangChain.

Consequences

  1. LangChain Integration: The code now uses langchain.llms.Gemini to interact with the Gemini API. This provides a more structured and consistent way to work with LLMs.
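The prompt-template pattern the ADR highlights can be sketched with a few lines of plain Python; LangChain's own PromptTemplate plays this role in the refactored service, with added validation and partial formatting. The template text and variable names below are illustrative:

```python
class PromptTemplate:
    """Minimal stand-in for the prompt-template pattern (LangChain's
    PromptTemplate offers the same idea with validation and partials)."""

    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        # Fill the named placeholders with the caller's values.
        return self.template.format(**kwargs)

SYNTHESIS_PROMPT = PromptTemplate(
    "Summarize the following architecture decision record in {max_words} words:\n\n{madr}"
)

prompt = SYNTHESIS_PROMPT.format(max_words=50, madr="# 21. Publish Service\n...")
```

Keeping the template separate from the code that calls the model is what makes later prompt tuning a one-line change.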


26. Use a Vector Database to Store C4 Diagram Information

Date: 2025-03-22

Status

Accepted

Context

We want to use a vector database to store C4 diagram information and retrieve it based on a distance function. A vector database stores data as high-dimensional vectors, capturing semantic relationships between items. This allows for efficient similarity searches, finding items with similar meanings or characteristics rather than just exact matches. Advantages include improved search relevance and enhanced recommendation systems.
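The distance-based retrieval described above can be sketched with cosine distance, a common choice for vector stores. The three-dimensional embeddings and diagram names below are toy values for illustration; real embeddings come from an embedding model and live in the vector database:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Toy embeddings standing in for stored C4 diagram descriptions.
store = {
    "container-diagram": [0.9, 0.1, 0.0],
    "deployment-diagram": [0.1, 0.9, 0.2],
}

query = [0.8, 0.2, 0.1]  # embedding of the user's query
nearest = min(store, key=lambda k: cosine_distance(query, store[k]))
```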


27. Integrate RAG into LLM Requests

Date: 2025-03-22

Status

Accepted

Context

We want our application, which relies on a Large Language Model (LLM), to answer user queries based on information stored in C4 diagrams within our CosmosDB vector database. Currently, the LLM’s responses are limited to its pre-trained knowledge, which may not be up to date or specific to our C4 diagrams. To enhance the accuracy and relevance of the LLM’s responses, we need to integrate a Retrieval Augmented Generation (RAG) system. This system will retrieve relevant information from the CosmosDB vector database based on the user’s query and provide it as context to the LLM.
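The augmentation step described above boils down to prepending the retrieved snippets to the user's question before calling the LLM. A minimal sketch, assuming the vector search against CosmosDB has already produced its top-k results; the prompt wording and example snippet are illustrative:

```python
def build_rag_prompt(question: str, retrieved: list[str]) -> str:
    """Assemble the augmented prompt: retrieved C4 snippets first, question last.

    The retrieval itself (vector search against CosmosDB) is out of scope
    here; `retrieved` holds its top-k results.
    """
    context = "\n\n".join(retrieved)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "Which service publishes ADRs?",
    ["Go Service pushes ADRs to the GitHub repository via the GitHub API."],
)
```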