Gabrielle: I first heard about Information Balance (Infobal) at CTUG in October. Is Infobal just new to the HPE NonStop space, or is it a new company?
Gabor: Actually, we’ve been in business for about 25 years, based out of Toronto. I was one of the three original founders in 1989, and one other founder and I are still active in the company. For the first 15 years we focused on IT consulting and services, and in the late ‘90s we dove head first into the Y2K space. Part of that adventure was developing software tools to support Y2K remediation and verification. This led us in a new direction in the early 2000s, so we divested our consulting practice, and for the last 10 years we’ve focused on software development. About seven or eight years ago, we released a product called IB Application Road Map (ib-ARM), which is a bottom-up application portfolio management solution, and this has been our main focus since then. In terms of HPE NonStop, we have only been supporting this platform for a little over a year, and we already have a few NonStop customers.
Gabrielle: Could you tell me a little bit more about ib-ARM?
Gabor: You can think of ib-ARM as an automated systems documentation or application discovery solution: we take customer-developed source code, parse it, and show the semantic relationships. We also provide a lot of functionality to work with that information. This was originally developed to support mainframe, client-server, and web platforms, but about a year ago we extended it to the HPE NonStop platform. Our target customers are large insurance or financial services companies—organizations with large IT shops and large in-house-developed portfolios, so some of our installs have 25, 30, even 50 million lines of code in a repository. Typically, Tandem companies are smaller in the volume of their applications, but one of our large banking clients had a significant Tandem install, so that was part of the reason why we expanded into the NonStop market.
Gabrielle: Could you give some more detail on this project?
Gabor: Sure, so one of the major Canadian financial institutions had a need to provide its IT staff with a powerful, platform-independent tool that could do systems documentation, analysis, and search across its entire systems portfolio. This need was driven by their realization that much of their up-front systems development and maintenance work involves repeated analysis. On multiple occasions they would have to search for and assess existing application functionality, and then evaluate the scope and impact of contemplated changes in terms of development, testing, and ongoing operations.
Given the highly integrated and heterogeneous nature of their systems portfolio, the analysis would span many applications, technologies, and platforms; and they knew that having a single, centralized tool would drive significant productivity improvements for the 200+ staff. The bank issued a formal RFP with over 350 well-defined requirements, and invited us, along with several other APM and application discovery vendors, to respond. We successfully landed the engagement, conditional on the successful delivery of a proof of concept (POC). This POC, however, was something that we had never done before: it was not a mere one or two million lines of code to demonstrate functionality; it was a full-blown implementation of 30 million lines of code involving a series of technologies across mainframe and web platforms, with a large number of custom requirements thrown in.
We successfully completed the POC by the end of 2008, and ib-ARM was rolled out to production throughout 2009 in four separate phases. At the bank, various internal productivity benchmarks confirmed a complete payback of their investment within a six-month timeframe. Then from early 2010 to the end of 2011, the ib-ARM install was expanded to include several new applications on additional platforms, including HPE NonStop. And in 2012, a second ib-ARM install was added to house their various credit card applications. The bank now has a total of 50 million lines of code implemented in ib-ARM.
Gabrielle: What are some other use cases for this product?
Gabor: ib-ARM is especially applicable to companies that are looking into application integration or migration, because these projects start with getting a good sense of what is already in place. Our software does a good job of taking inventory and capturing this picture in a way that can be queried and explored. There are really two major use cases. In the first, ib-ARM provides a complete inventory: all of the components, all of the semantic relationships, traceability from frontend to backend, and a full mapping of the data accesses. It gives insight into your application structure, so instead of a manual exercise to get your arms around what you have and document it, the tool delivers that picture in an automated fashion, along with application mining, discovery, and search capabilities. In the second use case, ib-ARM helps customers decide whether to continue maintaining the Tandem platform or to replace it.
Gabrielle: Are there similar application portfolio management solutions out there?
Gabor: There are other APM solutions, but as far as we know, none of them support the NonStop platform.
Gabrielle: What’s the story with BRX (business rules extraction), assuming that BRX also runs on NonStop?
Gabor: BRX extracts business processes and business rules from applications that have been captured and processed in an ib-ARM repository. It is currently in beta, and right now we’re at the tail end of development. Our focus with that product is on COBOL, so it is somewhat limited in terms of its technology scope at the moment. If a Tandem user has COBOL on their NonStop system, then it would apply, but the typical focus is large mainframe applications.
Gabrielle: What will you be working on over the next year?
Gabor: Well, for the past couple of years we have been dedicated to expanding ib-ARM to more mid-range and large-scale mainframes, and at this point there aren’t many technologies out there that it isn’t compatible with; we currently have 90+ parsers in production. The goal now is to continue developing the product and to provide specific solutions that address the growing compliance requirements and challenges in the US and Canada. This is especially important with regard to personal information, and we are currently working with a customer to provide automated support for protected health information, including audits of how health information is used in their applications. Our efforts this year will be on building these business solutions with ib-ARM and helping customers comply with HIPAA, PCI, and other standards. This requires more detailed semantic parsing and tracing, from screen-field-level access all the way through the various layers to the backend database files, so we have to dive deeper into the code and the connections between the frontend and backend wherever personal information might be involved. These new components will be included in the product.