The Co>Operating System
The Ab Initio engine

Rules, dataflow applications, and orchestration plans are by themselves just graphics. It is the Co>Operating System that brings them to life.

The Co>Operating System is the single engine for all processing done by Ab Initio’s technologies. Its capabilities are unrivaled and, because all other Ab Initio technologies are built on top of the Co>Operating System, they inherit all of those capabilities in a consistent manner. That’s what happens when a system is architected from first principles!

Key capabilities of the Co>Operating System include the following:

  • There is no limit to the size or complexity of applications.
  • There is no limit to the number or complexity of the rules inside these applications.
  • The Co>Operating System runs on Unix, Linux, zLinux, Windows, and z/OS.
  • The Co>Operating System provides unlimited scalability for processing very large amounts of data in limited amounts of time. Its extremely high efficiency means that it requires far less hardware than alternatives.
  • The Co>Operating System can run applications across networks of servers, each of which can be running a different operating system.
  • The Co>Operating System natively speaks complex data formats and structures, including legacy and international data.
  • The Co>Operating System provides an unprecedented degree of robustness and reliability.


The Co>Operating System provides unlimited scalability

The Co>Operating System’s architecture takes a simple approach to making business applications scalable. You start by laying out your application graphically. You then place parallelization components at the points of the application that need to scale. When the application runs, the Co>Operating System will take the scalable sections of the application and replicate them across multiple CPUs and multiple servers, as desired. Each of these replications, called a “partition,” will get a subset of the original data to process. The more partitions there are, the more the application scales. The diagram below shows how different parts of a single application might be partitioned to run across different numbers of CPUs:

[Diagram: one application partitioned across different numbers of CPUs. Panel 1: "On the surface". Panel 2: "What's really happening".]
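The partitioning idea described above can be sketched in ordinary Python. This is an illustration of data partitioning in general, not Ab Initio's implementation; the `partition` and `transform` names, the record layout, and the hash-on-key scheme are all invented for the example:

```python
def partition(records, n_partitions):
    """Split records into n_partitions disjoint subsets by hashing a key.
    Each subset ("partition") gets its own copy of the downstream logic."""
    parts = [[] for _ in range(n_partitions)]
    for rec in records:
        parts[hash(rec["id"]) % n_partitions].append(rec)
    return parts

def transform(part):
    """The replicated section of the application: a trivial per-record rule."""
    return [{"id": r["id"], "total": r["amount"] * 2} for r in part]

records = [{"id": i, "amount": i * 10} for i in range(8)]
# In a real engine each partition would run on its own CPU or server;
# here they run one after another, then the results are merged.
merged = [out for part in partition(records, 4) for out in transform(part)]
print(len(merged))  # 8 records in, 8 records out, processed in 4 partitions
```

The key property is that the partitions are disjoint, so the same per-record logic can be replicated as many times as the workload demands without coordination between copies.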

There are, of course, many details to getting scalability right. If a single detail isn’t right, the system doesn’t scale. Ab Initio has sweated all those details so that you don’t have to – details necessary for applications to process tens of billions of records per day, store and access multiple petabytes of data (that’s thousands of terabytes), and process hundreds of thousands of messages per second. That said, to process thousands of messages per second, or gigabytes to terabytes a day, there is no substitute for experienced and sophisticated technical developers. Ab Initio enables these people both to be remarkably productive and to produce systems that truly work.

The Co>Operating System is a distributed processing system

Large enterprises inevitably have a mixture of servers distributed across a network. Getting these servers to cooperate on behalf of applications is a challenge. That’s where the Co>Operating System comes in. The Co>Operating System can run a single application across a network of servers – each server running a different operating system. The Co>Operating System makes all these systems “play nice” with each other.

For instance, an application can start with components on the mainframe because that’s where the data is, run other components on a farm of Unix boxes because that’s where the compute power is, and end with target components on a Windows server because that is where the report is supposed to end up. The fact that this application spans multiple servers is irrelevant to the developers and the business users – all they care about is the business flow and the business rules. The Co>Operating System knows where the work is supposed to be performed, and does it there.
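The placement idea in the paragraph above amounts to a declarative plan: each component names the host where it should run, and the engine dispatches accordingly. A minimal sketch of that routing logic, with all hostnames and component names invented for illustration:

```python
# A toy "placement plan" for one application spanning three machines,
# mirroring the mainframe -> Unix farm -> Windows example above.
PLAN = [
    {"component": "extract_accounts", "host": "mainframe01"},  # source data lives here
    {"component": "score_customers",  "host": "unix-farm"},    # compute power lives here
    {"component": "write_report",     "host": "winsrv02"},     # report is delivered here
]

def run_plan(plan, launch):
    """Dispatch each component to its declared host via `launch`.
    In a real distributed engine, `launch` would start a remote process;
    here it is injected so the routing logic can be exercised locally."""
    return [launch(step["host"], step["component"]) for step in plan]

log = run_plan(PLAN, lambda host, comp: f"{comp} @ {host}")
print(log[0])  # extract_accounts @ mainframe01
```

The point of the sketch is the separation of concerns: the plan describes *where*, the components describe *what*, and neither the developer's logic nor the business flow changes when a component moves to a different machine.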

The same Co>Operating System engine does real-time, service-oriented architectures (SOA), and batch

It’s a real hassle when an application built to run in batch mode suddenly has to handle real-time transactions because the business demands it. Or when the business decides that a real-time application needs to process very large nightly workloads, but it can’t, because a real-time architecture can’t make it through millions of transactions in just a few hours. In both cases, the same business logic has to be reimplemented with the other methodology, and then there are two different and incompatible underlying technologies. Twice the development work, twice the maintenance.

Not so with the Co>Operating System. With the Co>Operating System, the business logic is implemented just once. And then, depending on what the logic is connected to, the application is batch, real-time, or web service-enabled. The same logic can be reused in all those modes, generally with no changes. All that is required to build applications that can span these different architectures is Ab Initio’s Continuous>Flows, the real-time processing facility of the Co>Operating System.
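The "write the logic once, connect it to different endpoints" idea can be sketched generically. This is not Ab Initio code; the function names and record shapes are invented, and the sketch only shows the general pattern of one rule set feeding both a batch driver and a streaming driver:

```python
def apply_rules(record):
    """The business logic, written once."""
    return {"id": record["id"], "flagged": record["amount"] > 100}

def run_batch(records):
    """Batch mode: apply the rules to a whole dataset at once."""
    return [apply_rules(r) for r in records]

def run_stream(message_iter):
    """Real-time mode: apply the same rules to messages as they arrive."""
    for msg in message_iter:
        yield apply_rules(msg)

batch_out = run_batch([{"id": 1, "amount": 50}, {"id": 2, "amount": 200}])
stream_out = list(run_stream(iter([{"id": 3, "amount": 150}])))
print(batch_out[1]["flagged"], stream_out[0]["flagged"])  # True True
```

Because `apply_rules` knows nothing about where its input comes from, switching between a nightly file feed and a live message queue changes only the driver, not the logic.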

Ab Initio’s real-time capabilities include:

  • Reusability of logic between batch and real-time applications
  • High performance in all execution models (hundreds of thousands of messages per second in real-time)
  • Robust connections to messaging systems, including all standard products, and ability to handle even proprietary information buses
  • Native support for service-oriented architecture (SOA) and every data format, including XML
  • XA support (if you really need it – you might not, because Ab Initio’s checkpointing system is far more efficient)
  • Fail-safe handling of infrastructure problems and/or data errors

Learn more about Ab Initio’s real-time processing capabilities and Continuous>Flows.