I'd like to write down some ideas about common Micro Services issues. Everything relates to a project based on the development of archetypes for Micro Services, with special attention to low latency, communication, low footprint, ease of deployment, logging, etc.
The first part is about objectives and how I found the general landscape in companies.
The second part enumerates the main features every MS architecture should have.
The third part is a description of the architecture and features of the MS archetypes.
This time the entry includes some ideas and proposals about adopting a new approach to server programming and a new software architecture. Server-side applications and software components usually score poorly on key features such as Resiliency, Scalability, Responsiveness and an Event-Driven approach. The absence of non-blocking I/O mechanisms and their monolithic structure make the adoption of new approaches necessary.
I am thinking of an abstraction layer over a strictly well-defined Programming Model, based on the main features and specific APIs we need in order to get an integrated ecosystem built on MSA and a modern, advanced SDLC. The diagram below, from the well-known Akka framework, can illustrate this:
Applications are not always ready for scalable scenarios; this task is completely delegated to the infrastructure. So developers must take on the obligation to produce functionally compliant, secure, performant, scalable components, and all of these features should be checked by a reviewer in the Code Review phase for each feature. This situation imposes too many prerequisites to work well. The result is that in many projects our code is not actually scalable, it performs poorly, and it is not appropriately tested (regarding scalability and concurrency).
What are we talking about?
- Scalability
- Resiliency
- Event-Driven
- Responsiveness
- Concurrency
- MSA-ready components (making Micro Services)
- Built through a modern SDLC (Continuous Delivery and DevOps)
Microservices is a relatively recent and fashionable architectural approach. But most of the time, architectures based on MSs do not address the common issues that come with MSs: auto-scaling, service discovery, aggregated logging, supervision, monitoring, etc. are not often first-class citizens in MSAs. Moreover, MSAs are an invaluable opportunity to bridge the gap between low latency and "normal" latency components. I mean, MSs are (or should be) really small, simple, and absolutely straightforward: they do only one thing and do it in the best possible way. So developing an MSA system is a great opportunity to include High Performance programming in our MSs.
Looking Ahead (Towards a New Technological Landscape)
There are very commonly found problems in JVM programming: lock contention, lack of concurrent and atomic operations, mutable objects, excessive dependencies on heavy frameworks, excessive JVM footprint, etc. All these factors make current applications and software components slow (because of thread-unsafe processes) and poorly responsive. Projects in sectors like Capital Markets, market interconnections, etc. use excessively complex custom approaches based on heavy exploitation of the low-latency corners of the JVM languages' APIs. Such projects require strong training and highly skilled professionals.
The primary objective is to provide projects with high performance features in all of their components at minimum cost. Developers can focus their personal skills on functional and feature development by using the abstraction layer, while the architecture does the hard work included in the above diagram.
Other things that I have proposed in the past at companies I've worked for are considered normal right now: Git as the SCM, the rise of JavaScript Single Page Applications (specifically AngularJS), the move to a new SDLC involving Continuous Delivery and DevOps, and new ways of architecting focused on decoupled, autonomous components.
So, we now have three main factors to consider:
- High Concurrency components
- High Performance
- MSA-ready components (Micro Services), distributable and built through a modern SDLC (Continuous Delivery and DevOps)
I am convinced the programming paradigm is moving rapidly in the described direction: highly performant, versatile, resilient, auto-scalable micro services based on well-proven SDLC procedures, mainly built on DevOps methodologies. So we need a base for programming that makes it easier to produce systems based on micro-services architectures, built and run on solid DevOps-based SDLC processes.
We need scaling in new, improved ways. By making self-monitored components we can build systems with auto-scaling features.
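As an illustration, here is a hypothetical sketch (the class name and the reporting interval are mine, not part of any concrete framework) of a component that periodically publishes its own load metric, which an external supervisor could use to trigger scaling decisions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical self-monitoring worker: every second it reports its queue
// depth, so a supervisor (or orchestrator) can decide to add instances.
public class SelfMonitoredWorker {
    private final BlockingQueue<Runnable> inbox = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public SelfMonitoredWorker(Consumer<Integer> loadReporter) {
        // One worker thread drains the inbox.
        Thread worker = new Thread(() -> {
            try {
                while (true) inbox.take().run();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
        // Publish the current backlog once per second; the reporter could
        // push to a metrics endpoint, a message topic, or a registry.
        scheduler.scheduleAtFixedRate(() -> loadReporter.accept(inbox.size()),
                                      1, 1, TimeUnit.SECONDS);
    }

    public void submit(Runnable task) {
        inbox.add(task);
    }
}
```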
The other side of the topic is performance. The challenge in the programming universe is reaching the Low Latency nirvana. To that end, languages such as Java have been including new components in each edition in order to meet these requirements. Despite this, Low Latency techniques remain difficult and complex, with a steep learning curve.
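One example of those per-edition additions (assuming JDK 8 here) is `LongAdder`, which stripes contended updates across internal cells instead of spinning on a single atomic value:

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class ContendedCounter {
    public static void main(String[] args) {
        // LongAdder (added in JDK 8) targets highly contended counters:
        // under contention it outperforms a single AtomicLong because
        // updates are spread across internal cells and summed on read.
        LongAdder hits = new LongAdder();
        IntStream.range(0, 1_000_000).parallel().forEach(i -> hits.increment());
        System.out.println("count = " + hits.sum()); // 1000000
    }
}
```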
The basic conclusion is abstraction. That's the most common approach in well-consolidated enterprise systems. Unfortunately, abstraction levels are not easy to understand and too frequently are based on wrapping objects with some concurrent feature. The result is that the learning curve is still too high, although they do yield more consistent code.
The proposed solution is based on integration with a new paradigm, Reactive Programming, and then abstracting this integration behind an easy-to-understand, easy-to-learn, simple API. This will allow average developers to focus on developing features from requirements while the high performance plumbing works behind the scenes.
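As a rough sketch of the idea (the `Pipeline` name and shape are hypothetical, not the proposal's actual API), a thin facade can hide the asynchronous machinery behind two or three methods, here using plain JDK `CompletableFuture`:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

// Hypothetical facade: callers chain steps without touching threads,
// executors, futures or locks; the layer owns all of that.
public final class Pipeline<T> {
    private static final ExecutorService POOL = Executors.newWorkStealingPool();
    private final CompletableFuture<T> stage;

    private Pipeline(CompletableFuture<T> stage) { this.stage = stage; }

    // Start a pipeline from any producer of data.
    public static <T> Pipeline<T> from(Supplier<T> source) {
        return new Pipeline<>(CompletableFuture.supplyAsync(source, POOL));
    }

    // Add a transformation step, executed asynchronously.
    public <R> Pipeline<R> then(Function<T, R> step) {
        return new Pipeline<>(stage.thenApplyAsync(step, POOL));
    }

    // Terminal step: hand the result to a sink.
    public void deliver(Consumer<T> sink) {
        stage.thenAcceptAsync(sink, POOL);
    }
}

// Usage (fetchOrder, price and publish are placeholders):
// Pipeline.from(() -> fetchOrder()).then(o -> price(o)).deliver(r -> publish(r));
```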
The common current situation is not compliant with modern technology and procedures. Considering three main blocks for classifying all the actors in software delivery, we can draw the diagram below:
The asynchronous paradigm is almost never applied, despite the JDK, JEE, Spring, etc. having included asynchronous utilities as an integral part of their APIs for some years now.
Concurrency utilities are not commonly used in projects, despite the JDK and JEE having included them as an integral part of their APIs for many years. Other more recent features, like the Fork/Join framework in JDK 7, are not used at all.
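For reference, this is roughly what the JDK 7 Fork/Join framework looks like in practice; a minimal recursive-sum sketch:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Minimal Fork/Join example: sum an array by splitting the range in half
// until chunks are small enough to compute directly.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute right here, join left
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total); // 1000000
    }
}
```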
The JVM footprint is too big in most projects. There are plenty of examples with several GB of assigned memory and severe memory-consumption problems.
Our applications are not especially responsive, and the times to accomplish actions or request resources are often too high.
Deployments are usually manual; only the tools available in clients' networks provide a different scenario. Nevertheless, these tools, meant to automate deployments, are usually driven by hand.
We are still in the world of monolithic applications, where one single package contains all the components. This approach is even pushed to the point of producing unexpected results.
Furthermore, application architectures usually contain multiple layers and data object converters, making computing times even longer.
Architectures are almost never directly scalable. They are based exclusively on high-availability criteria provided by the infrastructure.
No auto-recovery mechanisms are implemented.
No infrastructure component is part of the project.
Database components are rarely included in the project and handled in a specific way (version control, releasing, deployment to databases).
Integration patterns are only followed when mandatory because of some architectural component provided by the client (e.g. message queue engines).
Procedures related to the software development life cycle are mainly based on the Continuous Integration (CI) methodology. Despite recent strong efforts in this direction, CI is far from being extensively implemented or adopted in projects.
Can we change this scenario?
The SDLC Evolution initiative is based on progressing from a pure Continuous Integration approach to a more complete one, comprehensive of all the factors described above. By basing the new SDLC on Continuous Delivery and DevOps methodologies, we can offer the most appropriate life cycle for components based on new architectures. That's the primary reason for all of this: we should create an evolved SDLC to serve as the main methodological platform for the new architecture. We consider the Micro-Service Architecture the potentially most successful one for us. These MSA components need a new Programming Model, a new way to develop them. And that's the center of this proposal.
This approach is mainly based on Micro Services. This allows us to design distributable, scalable, decoupled components. Moreover, other advantages of MSA are the ability to self-recover, self-scale, self-heal, etc.
All advantages have counterparts. Components are simpler, but the general complexity of projects grows. Tasks such as profiling, monitoring, measuring load, and metrics at diverse levels should be a common aspect of projects.
Despite this, the general technological level is better, and projects are more efficient and have a higher ROI once the whole landscape is mature.
Single Page Applications are conceived to work as Micro Services on the client side. They are directly integrable into Micro Services architectures by using message-based communication (e.g. STOMP JSON messages over WebSockets), as in the sketch below.
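A minimal sketch of that integration point, assuming Spring's WebSocket/STOMP support on the server side (the `WebSocketMessageBrokerConfigurer` interface as in Spring 5's spring-websocket module); the endpoint and destination names are illustrative only:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.stereotype.Controller;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

// The SPA connects to /ws, sends JSON to /app/echo and subscribes to /topic/echo.
@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws").withSockJS(); // WebSocket endpoint (SockJS fallback)
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/topic");              // broker-bound destinations
        registry.setApplicationDestinationPrefixes("/app"); // app-bound destinations
    }
}

@Controller
class EchoController {
    // A STOMP frame sent to /app/echo; the returned payload is broadcast
    // to every subscriber of /topic/echo.
    @MessageMapping("/echo")
    @SendTo("/topic/echo")
    public String echo(String message) {
        return "echo: " + message;
    }
}
```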
In this respect, the life cycle cannot be limited to code integration. Continuous Testing and Continuous Delivery methodologies are key and totally needed. DevOps goes further by integrating infrastructure operations with the rest of the components in the project.
This evolved SDLC makes changes in architecture possible. Changes in architecture make changes in programming possible. Changes in programming will make improved systems and applications possible.
These approaches mean a strong change in organizational culture. But the processes, tools and methodologies are well-tested, and experience from real projects is invaluable.
We commonly produce systems and applications as a single big deliverable. This is a big problem, because a change in a small part means a whole new version release. This is a very risky way of working.
On the other hand, there is no integration with database components, and infrastructure issues are completely absent.
This is a common problem and is caused by three main reasons:
- JVM programming does not usually address concurrency issues. For example, in webapps concurrency is unavoidably delegated to application servers or containers. This is not a solution, and general performance problems in this type of application are extremely frequent.
- Architectures are based on synchronous, stateful flows, which are necessarily poor regarding performance, responsiveness and throughput.
- This means that companies based on this type of architecture are usually unable to compete in the market for low latency systems.
Company frameworks should be a guide towards a programming model, the best architecture and a good SDLC. Nevertheless, available server architectures usually try to occupy more and more space in a project's sphere of programming competences, impose a given common architecture, and do not provide an SDLC. Are project developers not reliable? As a direct consequence, we get the Big Mac solution (the same product for all zones of the globe).
Overengineering is the root cause. For some reason, framework designers invade project-specific areas. The direct results are:
- Cost overruns on framework features. Expenses are focused on under-used features and functionalities.
- Lack of adaptation to specific scenarios. In the end, the framework turns into an imposition on projects.
- Exhaustive testing practices are needed. Generally they are not present, because of the cost of overengineering.
- A specialized SDLC is dramatically needed. Generally it is not present, because of the cost of overengineering.
- Very specialized training is terribly needed. Same as above.
Overengineering is an excellent budget killer. So, the proposed framework is based only on the construction of the Abstraction Layer and the MSA features. By delivering services as components, projects can modulate the use of the framework in each service. No pre-defined architecture is imposed. The deadlines for developing the proposal are well defined, and further evolutions can be perfectly planned in short, well-defined cycles.
Testing is a must and is a strong part of the proposal, with requirements expressed as testable stories and reflected in the Functional Testing phase. Non-functional requirements are handled similarly. The evolved SDLC comprises all these testing parts within the same build cycle.
Projects are free to adopt their own programming practices, using the Abstraction Layer as a direct way to make apps and systems resilient, scalable and concurrent, no matter whether they are creating a webapp or a Low Latency system.
I mean the divide between webapp and non-webapp projects. This seam does not make sense. The programming model in web architectures is usually based on heavy IoC frameworks. These projects contain all the components needed to run in servlet containers with a small subset of the JEE specifications ("Tomcatization"). When running on capable, fully JEE-compliant application servers (e.g. WebLogic, WildFly), they do not take advantage of the services and features these compliant application servers provide. This situation has pushed web projects towards architectures totally different from, and incompatible with, other systems. As a direct aftermath, programming approaches diverge in a number of meaningful ways.
On the other side, non-webapp projects (e.g. Low Latency systems) are based on intensive use of complex concurrency and I/O techniques beyond the range of "average" developers. They work directly on JVMs, and their main feature is the provision of extremely complex custom libraries.
Overcoming this seam, which I call the "High Performance Programming Model", is a way to create software components with the same focus: low latency micro services with a strictly delimited set of functions and communication conditions, for strictly business functions as well as for webapps! Whether running as services (totally decoupled, separate processes, based on dumb communication, etc.) on JVMs, wherever they happen to work, or as dependencies (libraries in the same process), services share a unique programming approach, independent of the running scenario.
And Spring Boot is not the only way to create them!!!
Do you remember NodeJS and Express? You could create the web server in five lines. The same with livereload. Oh lads, it's JavaScript. Well, here are several examples in the JVM:
- Jetty exists! (see the embedded-server sketch below)
- Did you know Camel uses asynchronous Jetty servlets for communications?
- Did you know sbt uses a container you can reuse from Scala components?
- Akka for Scala provides its own HTTP libraries, and Spray does the same (Akka HTTP in fact grew out of Spray).
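To back the "five lines" point on the JVM side, here is a minimal embedded Jetty sketch (assuming Jetty 9 on the classpath); not much longer than the Express version:

```java
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Embedded Jetty: a complete HTTP server in one small class.
public class TinyServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain;charset=utf-8");
                response.getWriter().println("Hello from embedded Jetty");
                baseRequest.setHandled(true); // mark the request as processed
            }
        });
        server.start();
        server.join(); // block until the server stops
    }
}
```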
But inside each MS we need a programming model as the core of the abstraction layer. This layer does the work in an efficient way, and it will be described later. The goal of the proposed Programming Model is to provide developers with an easy-to-understand Abstraction API that allows them to interact with complex Low Latency components while avoiding difficult High Performance programming techniques.
Nihil obstat. Of course, programmers can use the base framework directly if they feel they have enough skills to do Functional Programming based on Streams. The Programming Model in the Abstraction Layer is the interface, but it's not mandatory. It should not be!
For several years a trope has been going around the programming community: that "average programmers" don't get functional programming (often said by functional programmers), or that "functional programming is too complex for the average programmer" (often said by those unwilling to learn functional programming).
It has become evident in recent years that functional programming is an absolutely crucial evolution, precisely because a large subset of us are average or below average. We understand the definition of "average" in a non-derogatory way: average is the middle level of programming quality in a company.
So, the Programming Model is the set of classes, available in the Abstraction API, that developers use to implement requirements. The main parts of the Programming Model can be integrated with pre-defined frameworks or other Programming Models such as Spring, JEE, etc.
The main goals of this Programming Model are listed below (a hypothetical API sketch follows the list):
- Never think in terms of shared state, state visibility, locks, concurrent collections, thread notifications and so on.
- A simple workflow, organizing the app-specific code and keeping policy decisions separated from business logic.
- Based on messaging.
- Hide low-level concurrency mechanisms.
- Interaction with the MSA support features.
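To make those goals concrete, here is a hypothetical sketch of what such an API could look like (every name is invented for illustration; this is not the real archetype API). The caller only sees messages and handlers; threads, locks and dispatching stay inside the layer:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical message bus: handlers never see threads or locks; each
// handler is invoked by the bus, and all coordination stays internal.
public final class MessageBus {
    private final Map<String, Consumer<Object>> handlers = new ConcurrentHashMap<>();
    private final ExecutorService dispatcher = Executors.newWorkStealingPool();

    // Business code only declares "on this topic, do this".
    public void on(String topic, Consumer<Object> handler) {
        handlers.put(topic, handler);
    }

    // Senders fire-and-forget; delivery is asynchronous by default.
    public void send(String topic, Object message) {
        Consumer<Object> handler = handlers.get(topic);
        if (handler != null) {
            dispatcher.execute(() -> handler.accept(message));
        }
    }
}

// Usage: no shared state, no locks, no thread management in user code.
// MessageBus bus = new MessageBus();
// bus.on("orders", o -> System.out.println("processing " + o));
// bus.send("orders", "order-42");
```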
In the next part I'll cover the main features I'd include in basic Micro Services archetypes.
Cheers!