It's Technology, stupid! SRE, CQRS, Event Sourcing, AI, IoT and Blockchain in the same bucket?

 
This post contains some thoughts about approaching innovation in Technology and Business. Beyond a few illustrative sketches, you won't find production code here; it is more about discussing strategic ideas to support the next generation of business opportunities in the increasingly competitive FinTech sector, and how modern technologies can support those ideas.

Here we go...

Innovation & Technology? Are you kidding me?

Nowadays the adoption of Innovation Plans in any sector requires a strong technological point of view. To be clear: how to use Technology to build cutting-edge products and services that can provide an advantage over competitors in an increasingly difficult market.



How we use Technology is the key question, and very likely the answer will be based on a well-drawn roadmap, a correct identification of technologies, tools, capabilities and pros/cons, and trained, skilled people. Yes, I mean those annoying and weird people we call techies, whom unfortunately we will still need until the third or fourth generation of automatic AI-based generators of code/architecture/SRE arrives.

And we'll tell you something: if you still live in Europe working with the JVM/Spring/RDBMS synchronous stack/way-of-life, you are playing a dangerous game. Your feeling of safety is only an illusion. You are doing the same as 3,000,000 other companies abroad, but with a man-hour cost at least 3 times greater (did you know that 85% of articles and posts about Spring come from India?). Good luck with your position in the market!!
  • India – Hourly Rate: $12–$30
  • Vietnam – Hourly Rate: $10–$20
  • Eastern Europe – Hourly Rate: $25–$50
  • Northern Europe – Hourly Rate: better not to ask...
Obviously, we are not talking here about... QUALITY, a matter that could fill several books on its own. Anyway, if you want to have a look, you can see here and below some reports on this topic.



Source: Daxx Average Rates Offshore for Developers
Every company is a software company. You have to start thinking and operating like a digital company. It’s no longer just about procuring one solution and deploying one. It’s not about one simple software solution. It’s really you yourself thinking of your own future as a digital company.

Satya Nadella, CEO of Microsoft

So, if we assume that every company is a software company (we did, right?), we should raise our techie level and play the game more seriously, stepping out of the comfort zone (that is, the obsolete technology we already know) and into the darkness of modern technologies to find out how the hell they can support our crazy business ideas.



In this post we want to outline one way of facing the problem of innovation, using old and not-so-old ideas as tools to provide a good, stable, fast and capable tech platform that supports and accelerates (and saves a lot of money along the way) most possible business ideas.


The Targets

What are the targets for your business ideas? Corporate clients, supermarket chains, the CIA, NATO? What about individuals? What about... THEM!



They will not tolerate excessive latency, breakdowns, outages, monthly release cycles and so on. Do not trust those sweet smiling faces. They are just waiting for your product to break so they can move to your competitor. These are the laws of the Market.

The idea is that we have to be ready to make good products, because the targets in the Market have been identified and the rules have changed.

The Involved Terms

We have already talked in this Techblog about the main platforms we created for the Fexco Central API: the Delivery Platform (DP) and the Computing Platform (CP), which are basically two facets of the same idea.

Just as a summary: the DP is the tool that SRE uses to provide IT automation, Continuous Delivery and the expected (and really high) level of Quality. The CP, on the other hand, is the set of practices, procedures and technologies that allow fast implementation of software components according to the designed architectural patterns, supported by the DP in terms of Quality and Delivery.

Well, what are the main concepts behind the DP and the CP, and how can they help a FinTech company succeed in the challenges ahead?

CQRS

Command Query Responsibility Segregation is simply a way to dissociate writes (Commands) from reads (Queries). It means we can have one database for writing and another for reading data (the views or projections), derived from the write side and managed by one or multiple databases (depending on our use cases).



Most of the time, the read side is updated asynchronously, which means both sides are not strictly consistent (welcome to eventual consistency, part 2!). We will come back to this point later on.

One of the ideas behind CQRS is that a single database is rarely equally efficient at handling both reads and writes, and therefore it is acceptable to use a different option for each operation.

It can depend on the choices made by the software vendor, the database tuning applied, etc. As an example, Apache Cassandra is known to be efficient at persisting data, whereas Elasticsearch or Apache Ignite are great for search and reading. Using CQRS is just a way to take advantage of the strengths of each solution.


In the Central API case we use an Operational Database (ODB) for write actions and a Reading Database (RDB) for... well, it is already clear. The ODB is based on 1/N Apache Cassandra clusters, while the RDB is based on Apache Ignite (1/N clusters that store data from the ODB directly in memory, ready for SQL queries using a different data model). The approach followed with the ODB and the RDB is CQRS, taking advantage of the strengths of the underlying technologies.
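Just to make the split tangible, here is a minimal Java sketch of the idea (nothing to do with the actual Central API code; all class and method names are invented): commands go to a write store, and queries are answered from a separate, denormalized read model.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write side: accepts commands and persists the facts (the ODB role).
class PaymentCommandHandler {
    private final List<String> writeStore = new ArrayList<>();   // stands in for the write database
    private final PaymentProjection projection;

    PaymentCommandHandler(PaymentProjection projection) {
        this.projection = projection;
    }

    void handleRegisterPayment(String customer, double amount) {
        writeStore.add(customer + ":" + amount);                  // persist on the write side
        projection.apply(customer, amount);                       // update the read model (often done asynchronously)
    }
}

// Read side: a denormalized view optimized for queries (the RDB role).
class PaymentProjection {
    private final Map<String, Double> totalsByCustomer = new ConcurrentHashMap<>();

    void apply(String customer, double amount) {
        totalsByCustomer.merge(customer, amount, Double::sum);
    }

    double totalFor(String customer) {
        return totalsByCustomer.getOrDefault(customer, 0.0);      // answered without touching the write store
    }
}

In the Central API case the write store role is played by the Cassandra-based ODB and the read model by the Ignite-based RDB, with the projection update happening asynchronously, which is exactly where the eventual consistency mentioned above comes from.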

Event Sourcing

Well, we have written something about it here and here. But basically we can conclude that:

Event Sourcing is just about ensuring that all changes to the application state are stored as a sequence of events

It means we do not store the state of an object. Instead, we store all the events that lead to that state. Then, to retrieve an object's state we have to read the events related to that object and apply them one by one. Easy, right?
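A minimal sketch of that replay idea in Java (a hypothetical bank-account example, not the Central API model): the current balance is never stored; it is recomputed by applying the related events one by one.

import java.util.List;

// Events are immutable facts: positive amounts for deposits, negative for withdrawals.
record AccountEvent(String accountId, double amount) {}

class AccountState {
    // Rebuild the state by reading every event related to the account and applying them in order.
    static double balanceOf(String accountId, List<AccountEvent> eventLog) {
        return eventLog.stream()
                .filter(e -> e.accountId().equals(accountId))
                .mapToDouble(AccountEvent::amount)
                .sum();
    }
}

// Usage: balanceOf("acc-1", List.of(new AccountEvent("acc-1", 100), new AccountEvent("acc-1", -40))) returns 60.0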



CQRS + Event Sourcing?

Both patterns are frequently grouped together. Applying Event Sourcing on top of CQRS means persisting each event on the write part of our application. Then the read part is derived from the sequence of events.

If you are wondering: Event Sourcing is not required when you implement CQRS, although the opposite (CQRS when you implement Event Sourcing) is more than recommendable!

Actually, CQRS is needed in most use cases when we implement Event Sourcing, because we may want to retrieve a given state without having to compute N events. It means the read model will be based on pre-calculated views and snapshots that materialize the product of the events into a state for you. One exception is the case of a simple audit of operations: there we don't need to manage views (or states), as we are only interested in retrieving a sequence of actions in time.
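And a tiny sketch of the snapshot idea, reusing the AccountEvent record from the previous sketch (again, all names invented): instead of replaying the whole log, we persist a snapshot every so often and only replay the events recorded after it.

import java.util.List;

// A snapshot captures the pre-computed state up to a known position in the event log.
record BalanceSnapshot(double balance, int lastEventIndex) {}

class SnapshotReader {
    // Current state = latest snapshot + the events appended after it.
    static double currentBalance(BalanceSnapshot snapshot, List<AccountEvent> eventLog) {
        double balance = snapshot.balance();
        for (int i = snapshot.lastEventIndex() + 1; i < eventLog.size(); i++) {
            balance += eventLog.get(i).amount();   // only the tail of the log is replayed
        }
        return balance;
    }
}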


IoT. The Empire of Things Strikes Back!

The Internet of Things can be many things, and usually we think of wearables, fridges, washing machines, etc. The idea is simple: given the small size of microchips, they can be installed almost everywhere, opening a world of new possibilities.


But there are things that definitely fit into the IoT approach and that you already know well: smartphones and small computers, for instance.

Basically think of IoT devices as autonomous environments with a huge computing capacity, capable of providing services with no connection to anywhere.




The Internet of Things involves new computing concepts regarding where components are located in the world: the Cloud, Fog and Edge computing scopes.
 
Just as an example, we have already followed the IoT approach for the network of RFX stores, handling each shop as an autonomous environment that involves two echelons, Edge and Fog. Each store has its own database with a subset of data from the Cloud Data Center.
While the Fog level handles communications with the Central API in the Cloud, the Edge devices synchronize their data with the Fog level with minimal latency.

Besides, heavy calculation processes (for instance reports, statistics, close-downs, etc.) happen at the Fog level, relieving the Cloud Data Center from a myriad of heavy concurrent computations and sending to the Cloud just the results to be aggregated into the ledgers. It works perfectly, but most importantly... it's elegant!!
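As a rough sketch of that Fog-side pattern (the endpoint, fields and class names are made up for illustration, not the real Central API contract): the store aggregates its own transactions locally and only the daily summary travels to the Cloud.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

class FogCloseDown {
    // The heavy aggregation runs in the store (Fog); only the result is pushed to the Cloud API.
    static void sendDailySummary(String storeId, List<Double> transactions) throws Exception {
        double total = transactions.stream().mapToDouble(Double::doubleValue).sum();

        String summaryJson = String.format(
                "{\"storeId\":\"%s\",\"transactions\":%d,\"total\":%.2f}",
                storeId, transactions.size(), total);

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://central-api.example.com/ledgers/close-down"))  // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(summaryJson))
                .build();

        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}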



It is the same stuff supermarkets have been working with for years but using some new hardware, protocols and software. It is not that crazy, right?
Our point is... forget the centralized web application approach! It is slow, it is risky, it is not working well! And we have to allow businesses to work autonomously!

Besides, new API consumers (Fog/Edge devices, smartphones, etc.) send information about their computing areas and devices. How do we manage this stuff? The Fexco Central API is multi-protocol and supports HTTP/HTTP2, AMQP and... MQTT. What the heck is MQTT? Another techie word? Good news everyone! It is! We'll cover this beautiful protocol in another post. For now it is enough to know that it is a protocol commonly used in IoT systems to send and receive telemetry and computing information. However, we have to know the hardware landscape and the connectivity status, and the security requirements are severe. But anyway, the benefits are huge!
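Just to give a flavour of how lightweight MQTT telemetry can be, here is a minimal publish sketch using the Eclipse Paho Java client; the broker URL, topic and payload are made-up examples, not the Central API's actual configuration.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TelemetryPublisher {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker and client id.
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "edge-device-042");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(true);
        client.connect(options);

        // A small telemetry payload published with QoS 1 (delivered at least once).
        MqttMessage message = new MqttMessage("{\"temp\":21.5,\"battery\":87}".getBytes());
        message.setQos(1);
        client.publish("stores/dublin-01/telemetry", message);

        client.disconnect();
    }
}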

Site Reliability Engineering. Wasn't there a shorter name?

No! Because SRE means a lot. SRE is the result of two worlds colliding. Those of us who have been in the software industry long enough are familiar with both of them: Operations and Infrastructure on one side, and Software Development on the other.

Development and Operations

What happens when we make two complex and critical parts of an organization collaborate tightly? The collision can become a black hole that swallows everything, or it can actually be a success and become a strong light in the company’s “universe”, acting as a lighthouse in the darkness.
What exactly is Site Reliability Engineering, as it has come to be defined at Google? My explanation is simple: SRE is what happens when you ask a software engineer to design an operations team. When I joined Google in 2003 and was tasked with running a "Production Team" of seven engineers, my entire life up to that point had been software engineering. So I designed and managed the group the way I would want it to work if I worked as an SRE myself. That group has since matured to become Google’s present-day SRE team, which remains true to its origins as envisioned by a lifelong software engineer.

Benjamin Treynor Sloss - Google's SRE

SRE aims to bring together the best of Operations (the focus on stability, well-defined actions, targets and performance metrics) and the best of Software Development (the focus on continuous innovation, agility, scripted automated procedures and quality metrics).

 

SRE Badass

Being a little more detailed, SRE pursues the following objectives:
  • Maximizing Reliability.
  • Helping design architectures and processes that keep Resilience high and Toil low.
  • Decreasing technical complexity.
  • Driving the usage of tooling and common components.
  • Implementing software and tooling to improve resilience and automate operations.
Get more detail about SRE, straight from where it was born, in the following video series from the Google folks: Google's SRE Video Series.

AI for Business? That sounds unnecessary and expensive

Read the posts about AI in this blog. They will give you a good picture of how we are working with it. Basically, three conclusions can be drawn from the news:
  1. The evolution of AI technology has made the required IT much cheaper. The main Cloud providers offer AI services at reasonable prices.
  2. AI is no longer scary for regular engineers. Apache Spark, TensorFlow, etc. are technologies found more and more often in CVs and skill sets.
  3. No more AI-specialized companies. This is a verifiable trend in the market: AI-only companies (usually recycled from the Big Data hype) are rapidly losing ground as regular software companies integrate AI into their development/IT practices.

What is the conclusion? AI is extremely profitable in many areas, and you can reap those benefits through predictions about Stock Management, Marketing campaigns, I/O of resources, the Future Evolution of the Market and much more!



An example: try to imagine an executive extracting, every morning, a bunch of Excel reports from the system (plain lists from the database, the old-style reports) in order to manufacture tailored reports. And this happens every working day, every year, for years.



Good worker! And undoubtedly it is an amazing way to waste the time of highly paid executives (please calculate the cost of your time and the invested effort), while AI can produce those reports in minutes... and not only that! It can also provide all the predictions and statistics you need! It is a typical case of tradition, comfort zone and ignorance. We're sorry, but it's true. In other words, this is a textbook case of a good investment in AI.
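As a taste of what "minutes instead of mornings" can look like, here is a minimal Apache Spark sketch in Java that builds the kind of aggregated report described above (the file name and column names are invented). It only covers the reporting part; the predictions would typically sit on top of it, for instance with Spark MLlib or TensorFlow.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class DailyReport {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("daily-report")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical sales extract straight from the operational data.
        Dataset<Row> sales = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("sales.csv");

        // The "tailored report": totals and average ticket per store and product.
        Dataset<Row> report = sales
                .groupBy("store", "product")
                .agg(sum("amount").alias("total_amount"),
                     avg("amount").alias("avg_ticket"));

        report.orderBy(desc("total_amount")).show(20);
        spark.stop();
    }
}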

On the other hand, companies that try to step out of the comfort zone usually plead the same reasons for holding back on the adoption of AI:
  1. The cost of 3rd Party services
  2. The lack of skills in the Job Market

Well, this is not the case anymore. You can start planning your AI adoption because you'll find the skills in the job market. And you do not need to go to the so-called AI companies anymore. You do not need them!



Anyway, as a piece of advice: do not let an AI-based bot with root privileges into your network... for now!

Block... What??

Blockchain has captured the collective curiosity of business and techie lads, and was rapidly labelled a hype by Mr. Donald Trump.



And it was (he was right, of course), as we saw with a relatively immediate decline... followed by another hype, and so on. It is confusing, and the Bitcoin stuff is not helping, I know, as it seemed to turn into a smart way of evading taxes. Just as a note, Blockchain is not based on new technology but rather on a bunch of ideas, practices and existing technologies applied to create something similar to Bitcoin (that is, an E-Ledger). While Bitcoin is what it is, Blockchain-based E-Ledgers, when applied to other use cases, happen to address a substantial latent pain that exists across industries:

Blockchain, or more accurately Distributed E-Ledger, is more of a catalyst to inspire change in the way disparate organizations work together in highly competitive markets. Existing inter-company transactions carry enormous costs in process, procedure and crosschecking of records to come to settlement on what could turn out to be a trivial exercise using blockchain technology. In short, Blockchain or distributed E-ledger technologies can provide the next wave of innovation that streamlines the way business operates, the same way the web did, giving birth to a new collaborative economy.

This is an extract from the Bletchley whitepaper (Nov 2017), a reading that I strongly recommend if you are interested in a lucid proposal about Blockchain-based Corporate Smart Contracts.


 

My Contract is so Dumb!

Do we need a new way of making contracts in general? Possibly not.
Do we need a way to make safer, automatically managed and verified contracts in software-based systems? We do!

The simplest comparison for a smart contract is a conditional (if/else) or, even better, an example in Gherkin syntax. Basically, we define a set of conditions and circumstances that trigger one or several automatic actions:

Given an open workflow is active between members of the G group
When a Seller in the G group sends the bill type "6342927FGHT" to a Buyer
And the validation of funds for the Buyer is OK
Then the funds are transferred to the Seller from the central safe box
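The same rule expressed as a plain conditional (a toy Java sketch with invented types, nothing like a real smart contract platform) makes the "it is basically an if/else" comparison obvious:

// Toy illustration of the Gherkin rule above; every type and method here is invented for the example.
interface Workflow { boolean isOpen(); String group(); }
interface Bill     { String type(); double amount(); }
interface Buyer    { boolean hasFundsFor(Bill bill); }
interface Seller   { }
interface SafeBox  { void transfer(double amount, Seller to); }

class BillSettlement {
    void settle(Workflow workflow, Seller seller, Buyer buyer, Bill bill, SafeBox safeBox) {
        // Given an open workflow is active between members of the G group
        boolean workflowActive = workflow.isOpen() && "G".equals(workflow.group());
        // When a Seller in the G group sends the bill type "6342927FGHT" to a Buyer
        boolean expectedBill = "6342927FGHT".equals(bill.type());
        // And the validation of funds for the Buyer is OK
        boolean fundsOk = buyer.hasFundsFor(bill);

        // Then the funds are transferred to the Seller from the central safe box
        if (workflowActive && expectedBill && fundsOk) {
            safeBox.transfer(bill.amount(), seller);
        }
    }
}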


For a more narrative definition, Smart Contracts allow for the conversion of human-readable natural language, like legal contracts, into computer-readable language. Natural human language is malleable and subject to interpretation, which makes it ambiguous and more expensive (lawyers, lawsuits, etc.). Computer language is more rigid and deterministic, which makes it less flexible, but cheaper (the expenses for the legal stuff are expected to be much smaller) and fairer.



Do you think it is something new? Smart Contracts were defined in 1996 by Nick Szabo as part of his work in algorithmics. So, here we are talking about Smart Contracts based on Blockchain, which is just one way of implementing Smart Contracts. That's the difference.
A Blockchain-based Smart Contract is essentially a ledger of itself within the distributed ledger. Tokenized assets exist within Smart Contracts themselves.

Blockchain-based Smart Contracts are not intended to replace existing common law, but to extend it and make it easier for individuals, businesses and eventually computers to make contracts with each other.

There are five important properties in good contracts:

  • Immutability (steps and actions are recorded and immutable, and we can verify this, as they do not change over time)

  • Observability (we can observe that the parties are doing what they said they would do in the contract)

  • Verifiability (see Immutability: we can guarantee, from a legal point of view, what happened, when it happened and who did it)

  • Enforceability (if someone violates the contract, the other parties have the right to take them to court and get compensation, and for that we'll need proof)

  • Privacy (the contract is not public, but visible only to the parties and the authorities)

Blockchain-based Smart Contracts meet all these requirements across the different solutions we have seen in the market. We'll write more about Smart Contracts in this Techblog pretty soon. The evolution of Smart Contracts has been amazing since the early days of Ethereum. For instance, Microsoft (the Bletchley project), Ethereum and other E-Ledgers are providing pretty serious and capable Smart Contract services.
E-Ledger providers are ready, they are mature, and they make us capable of providing much more and covering many more business areas.

How to Put It All Together? (is it really a good idea??)

This is actually the interesting part. We want to make our business ideas greater and bigger, extending them to new markets. We are now aware of what technology can offer us beyond the old world of slow monoliths and relational databases doing always-the-same-stuff in our small, miserable shelter somewhere in the comfort zone. And we need new technological support for all those crazy ideas about amazing new ways of doing business.

To go beyond, we have a pretty good approach: put all the stuff we talked about above in the same bucket. Scary, huh?



But not really. It is not even that innovative, as some important companies are already working in this direction, aggregating SRE, AI, Blockchain, CQRS, Event Sourcing and IoT.
The Delivery and Computing Platforms are already using these approaches and technologies, and therefore they are the best choice to create and support a new generation of business opportunities.

This is the world of the Delivery and the Computing Platforms! It is cool, it is amazing and it is ...



 

Comments

  1. Well done guys - a great synopsis of approaches and technologies that are changing how companies adapt to the new reality.

  2. Excellent guys, great post, well done.
