Retail Banking Modernisation: A Case Study on Transforming Customer Experience
gravity9’s mission is to accelerate our clients’ digital journey through the modernisation of systems and creation of new digital products.
Most organisations aim to improve their customer offerings and operate ever more efficiently. Nearly all of them run some form of legacy system that restricts innovation yet often sits at the core of the business. To solve this problem, gravity9 applies a process of digital decoupling that enables clients to migrate incrementally away from legacy systems while acknowledging their critical role. This approach allows our clients to mitigate the risks connected with changing business-critical systems while at the same time leveraging new technologies to solve the problems of tomorrow.
Problem Statement
Our latest assignment was to design and develop the foundation for offloading data from a core retail banking platform into MongoDB. Our client had tried to address performance issues with their core banking platform, Temenos T24 backed by Oracle, by offloading frequently accessed data from this system into a separate Oracle database. This approach did not deliver the performance or scalability needed to serve their customers, as the Oracle-based offload platform they created was used for both analytical and transactional operations and was unable to scale to the needs of both.
For example, if analytical workloads intended to run overnight spilled over into the business day, customers might have banking transactions rejected because the offload platform could not respond fast enough to satisfy the SLA of the banking transaction process.
In addition to the scalability problems with the Oracle-based offload platform, the client found that the platform was not easily maintainable or extensible, as most of the business-process logic was written in stored procedures.
The data model within the core banking system was not inherently relational, which added to the complexity of the offload platform. All data tables within the offload platform had just two columns:
- RECID – for the ID or Unique Key of the Record
- XMLRECORD – to store the data.
The XMLRECORD field is a string in which fields are separated by field markers, value markers and sub-value markers. An example value is: “<a1>value1</a1><a2>value2</a2>”.
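To make the format concrete, below is a minimal Java sketch that parses the simplified tag-style example above into a field-to-value map. The class name, regex-based approach and tag format are illustrative only; real T24 XMLRECORD values use positional field, value and sub-value markers and were handled by the dictionary-driven transformation described later.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative parser for the simplified XMLRECORD example above:
// "<a1>value1</a1><a2>value2</a2>" -> {a1=value1, a2=value2}.
public final class XmlRecordParser {

    private static final Pattern FIELD = Pattern.compile("<(\\w+)>(.*?)</\\1>");

    public static Map<String, String> parse(String xmlRecord) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = FIELD.matcher(xmlRecord);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parse("<a1>value1</a1><a2>value2</a2>"));
    }
}
```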
The goal of the project was to propose and implement a migration strategy to gradually move data from the old datasource into MongoDB, which offered better scalability and a more flexible document data model, and to migrate the business logic coupled with that data. The key requirements for the new solution were improved performance, to give our client’s customers a better experience, as well as increased maintainability and extensibility.
Proposed Architecture
We applied a digital decoupling approach to create message-driven microservices on top of the client’s data. This approach is based on a continuous migration process that replaces one part of the old offload system at a time instead of doing everything in a one-pass ‘big bang’. In this way we could build each business domain as a fully scalable microservice with its own database if needed. That enabled rapid, agile development that evolved continuously while the legacy systems still operated behind the scenes.
To gradually extract data from the core banking system we used a change data capture (CDC) pattern on the underlying Oracle database to publish, in real time, a snapshot of every record changed by the legacy core system; every time data changed in the core system, an event was generated to communicate it.
We implemented an application to listen to those changes, transform the raw XML format of the core banking system into a structured message format, and reshape it as needed before sending it on as a message for other consumers. Those consumers could use the message for processing or store it in MongoDB, the new datastore introduced to replace the old database.
To transform the raw XML data we introduced dictionaries describing the field markers and the structure of the object that should result from the transformation. The dictionaries were stored in MongoDB and also published as messages using the same CDC pattern.
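As an illustration, the sketch below shows what a dictionary entry might look like when written to MongoDB with the Java driver. The database and collection names (offload.mappings) and the field names (targetSchema, fields, marker, target) are hypothetical; the real dictionaries captured the T24 field markers and the target structure for each table.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.List;

// Hypothetical dictionary entry describing how CUSTOMER records are mapped.
public final class MappingDictionarySketch {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> mappings =
                    client.getDatabase("offload").getCollection("mappings");

            Document customerMapping = new Document("_id", "CUSTOMER")
                    .append("targetSchema", "Customer")
                    .append("fields", List.of(
                            new Document("marker", "a1").append("target", "name"),
                            new Document("marker", "a2").append("target", "sector")));

            // The same collection is published to Kafka via the MongoDB source
            // connector, so the Streams application sees every dictionary change.
            mappings.insertOne(customerMapping);
        }
    }
}
```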
Technologies Used
Our solution was implemented as a Kafka Streams application that transformed the original data from the raw XML format into a more structured Avro model. The original data snapshots were published to a Kafka topic using the Oracle GoldenGate CDC mechanism. The definition of how to transform the data was stored in a MongoDB collection, from where it was sourced to a Kafka topic using the MongoDB Kafka source connector. The transformation output was defined by Avro schemas. The cleanup policy for the mappings topic was set to compact to ensure that only the newest mapping for each table (the entity name was used as the topic key) is retained.
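As a hedged illustration of the compaction detail, the following AdminClient snippet creates a log-compacted topic keyed by entity name. The topic name, partition count and bootstrap address are assumptions; in practice such a topic may equally be provisioned by infrastructure tooling.

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

import java.util.List;
import java.util.Map;
import java.util.Properties;

// Creates a compacted mappings topic so only the latest mapping per
// entity-name key is retained (names and sizes are illustrative).
public final class CreateMappingsTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            NewTopic mappings = new NewTopic("t24.mappings", 1, (short) 1)
                    .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                    TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(List.of(mappings)).all().get();
        }
    }
}
```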
In the Kafka Streams application the mappings topic was materialised as a global table, so multiple topologies transforming different tables could use it. The transformed data was then published by the Kafka Streams application to another topic, from where it could be consumed by other applications or synced directly into a MongoDB collection using the MongoDB Kafka sink connector.
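The sketch below outlines the topology shape this describes: raw CDC snapshots joined against the mappings global table, with the transformed result written to an output topic. The topic names, String serdes and the trivial transform method are placeholders; the real application used Avro serdes and the generic, dictionary-driven transformation.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

// Simplified topology: raw CDC snapshots joined with the mappings global table.
public final class OffloadTopologySketch {

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Mapping definitions sourced from MongoDB, keyed by entity name,
        // shared across topologies as a global table.
        GlobalKTable<String, String> mappings =
                builder.globalTable("t24.mappings", Consumed.with(Serdes.String(), Serdes.String()));

        // Raw CDC snapshots for one table, keyed by RECID.
        KStream<String, String> raw =
                builder.stream("t24.customer.raw", Consumed.with(Serdes.String(), Serdes.String()));

        raw.join(mappings,
                        (recId, xmlRecord) -> "CUSTOMER",              // select the mapping by entity name
                        (xmlRecord, mapping) -> transform(xmlRecord, mapping))
           .to("t24.customer.transformed", Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "t24-offload-transformer");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder for the dictionary-driven XMLRECORD transformation.
    private static String transform(String xmlRecord, String mapping) {
        return xmlRecord; // the real implementation produces an Avro record
    }
}
```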
Since the Kafka Streams application performing the transformation was written in a generic way, it can transform any data as long as the provided Avro model and mapping definition use supported data types. Thanks to this, the generic Kafka Streams application could serve as a foundation for more domain-specific solutions that start by transforming data into Avro models and then perform joins and mappings on Kafka topics to finally publish data shaped to specific microservice domains.
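For example, building an Avro record from parsed XMLRECORD values might look like the sketch below. The Customer schema and its field names are invented for illustration; the real schemas were defined per table by the mapping definitions.

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;

// Hypothetical Avro model populated from dictionary-mapped XMLRECORD fields.
public final class AvroMappingSketch {

    private static final String CUSTOMER_SCHEMA = """
            {
              "type": "record",
              "name": "Customer",
              "fields": [
                {"name": "recId", "type": "string"},
                {"name": "name",  "type": ["null", "string"], "default": null}
              ]
            }
            """;

    public static void main(String[] args) {
        Schema schema = new Schema.Parser().parse(CUSTOMER_SCHEMA);

        // Values would come from the dictionary-driven XMLRECORD parser.
        GenericRecord customer = new GenericRecordBuilder(schema)
                .set("recId", "100123")
                .set("name", "value1")
                .build();

        System.out.println(customer);
    }
}
```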
To expose data for specific domains we implemented a simple REST API based on the Spring WebFlux framework. To take full advantage of the reactive stack we used Spring Data Reactive repositories to access the data stored in MongoDB.
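A minimal sketch of that reactive read path is shown below. The Account document, its fields and the /accounts endpoint are hypothetical; the real API exposed domain-specific resources over the MongoDB collections populated by the Kafka sink connector.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

@SpringBootApplication
public class OffloadApiSketch {

    public static void main(String[] args) {
        SpringApplication.run(OffloadApiSketch.class, args);
    }
}

// Reactive repository over the MongoDB collection, keyed by the T24 RECID.
interface AccountRepository extends ReactiveMongoRepository<Account, String> {
}

@RestController
class AccountController {

    private final AccountRepository accounts;

    AccountController(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Non-blocking lookup of a single account by its RECID.
    @GetMapping("/accounts/{recId}")
    Mono<Account> byRecId(@PathVariable String recId) {
        return accounts.findById(recId);
    }
}

// Simplified document model for illustration only.
record Account(String id, String customerId, String currency) {
}
```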
Project Outcome
The implemented solution covered only a small part of the data stored in the client’s original Oracle-based offload system, but by applying a strangler-pattern-like approach it was not only possible to move performance-critical data to a new, more scalable solution; the result can also be used as a foundation for future migrations. Additionally, the implemented solution shows that the migration can be done both efficiently and safely.
While new functionality and migrated use cases use the new MongoDB-based offload, improving performance and scalability, the remaining use cases can still be served from the older system without any changes. Choosing MongoDB Atlas proved to be a good decision, as it achieved all the performance and scalability goals where the previous Oracle-based approach had failed.
References:
Kafka Streams: https://kafka.apache.org/documentation/streams/
Apache Avro: https://avro.apache.org/docs/current/
Kafka MongoDB Connectors: https://www.mongodb.com/docs/kafka-connector/current/
Spring Reactive Processing: https://spring.io/reactive