ADDM 10.1

Tech Tip: BMC ADDM 10.1

Written by Colin Vozeh, Director of Sales-Enterprise Services, Datatrend Technologies, Inc.

In December 2014, BMC released version 10.1 of Atrium Discovery and Dependency Mapping (ADDM), an enterprise technology discovery tool. This version includes some highly anticipated new features, along with some thoughtful surprises. We’ve highlighted just a few of them and their benefits here, and we will continue to provide updates throughout the year.

Storage Discovery

Probably the most eagerly anticipated new feature in ADDM 10.1 is the ability to discover SAN and NAS devices in significant detail and place them in context with your other technology infrastructure like servers and applications. The information provided by ADDM discovery of storage devices equips us well to make good business decisions for the future.

Let’s face it: as much as we love enterprise technology, it’s just an investment – an investment we wouldn’t make if we didn’t absolutely have to. But quantifying the return on that investment – or even identifying the value in the first place – can be difficult to do at the best of times. With solid ADDM discovery information coupled with a thorough mapping of your applications, you’re most of the way there. But until now, storage has been a bit of a wildcard.

SAN and NAS systems by their very nature are shared systems: centrally managed services provided to users throughout each organization. As such, the management solutions provided by the vendors are targeted to the high-level view: how much storage do I have, where is it, and how much is being used? But storage is also an expensive investment, and it’s often not clear who’s using it, or why we even bought it in the first place. What value does all this storage provide to us?

ADDM 10.1 uses Web-Based Enterprise Management (WBEM) credentials to discover SAN systems via either their storage management software or a Storage Management Initiative Specification (SMI-S) provider (bypassing the management software altogether), and SNMP credentials to discover NAS devices. In both cases, ADDM can get detail down to each individual disk in each storage device. When coupled with the World Wide Port Name (WWPN) gathered from each discovered host using those storage systems, ADDM can make good inferences about which hosts are using file systems related to each storage partition.

This is valuable, because we can see clear relationships between every byte of storage, and the business processes that require that storage to provide their value. With our applications properly modeled and strong discovery, we can tell exactly which applications are using which file systems, and which storage partitions in turn.

Here’s a screenshot from the recent BMC webinar detailing the new features in ADDM 10.1. It clearly shows an application supported by a SQL database, which is in turn using several different storage systems.


And because each file system is related in ADDM to each storage system, we can tell just how much is actually in use by each application:


Here’s a list of the supported storage platforms:

  • EMC CLARiiON, VNX, Symmetrix
  • NetApp
  • Hitachi AMS, HUS, USP-V, VSP
  • HP StorageWorks P6000/EVA, 3PAR, Dot Hill P2000
  • IBM DS6000, DS8000, SVC, Storwize V7000/V3700

If your system is not on that list, be patient: BMC is already working on updates, and additional systems from Dell, EMC, and IBM are in the pipeline. These updates will be delivered in the same way as TKU updates, so you won’t have to upgrade your entire ADDM installation. In the meantime, contact your BMC account representative and let them know what systems you’re looking to discover; BMC engineers are always interested in your input.

And lastly, some great news: there is an out-of-the-box set of synchronization patterns for pushing this new data into Atrium CMDB. Note, however, that not all of the data is synchronized. File Systems and Storage Volumes, for example, are not part of the data that makes it over, likely because of volume considerations: it may simply be too much data to move. Here’s a list of classes in CMDB populated by storage data from ADDM:



Be careful with your triggers!

ADDM does lots of things behind the curtain with TPL, even when we think we’re being explicit about model behavior in our code. For example, when we trigger a pattern off of a DiscoveredProcess, and in the body of that pattern create an SI using the model.SoftwareInstance function, ADDM will automatically create the “Primary Inference” relationship between those two nodes, described thusly:


Note that the “Primary” inference relationship is distinct from the “Contributor” or “Associate” relationships, which mean different things.
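To make this concrete, here is a minimal TPL sketch of a pattern triggered off a DiscoveredProcess. The module, pattern, and process names are illustrative (not from any real TKU), and the TPL version number should match your ADDM release:

```
tpl 1.10 module TechTip.Examples;

pattern ExampleProcessSI 1.0
  '''Sketch: an SI inferred from a triggering DiscoveredProcess.'''
  overview
    tags example;
  end overview;

  triggers
    // Trigger on a hypothetical daemon process.
    on process := DiscoveredProcess where cmd matches 'example_daemon';
  end triggers;

  body
    host := model.host(process);

    // Because the pattern triggered on a DiscoveredProcess,
    // model.SoftwareInstance() automatically creates the Primary
    // inference relationship from this new SI back to that process.
    si := model.SoftwareInstance(
              key  := 'ExampleApp/%host.name%',
              name := 'Example App on %host.name%',
              type := 'Example App');
  end body;
end pattern;
```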

Similarly, if after a period of time we no longer discover that process, the SI will age and eventually be destroyed. ADDM does this automatically, too, simply because we triggered off a DiscoveredProcess and created a SoftwareInstance.

This process is called the Node Lifecycle, and it describes how objects in ADDM come into existence and are then either updated or retired. The same thing happens when we trigger a pattern off of a SoftwareInstance and, in the body of that pattern, create a BusinessApplicationInstance. The SI is not automatically given a SoftwareContainment relationship to that BAI, but the BAI does receive the Primary inference relationship to the SI:


With this relationship, ADDM knows how to maintain the objects created in that pattern: generally speaking, the BAI will age at the same rate as the primary inferring SI, and will be destroyed when that SI is destroyed. This is helpful because we can control what elements guarantee the existence of the application, which satisfies the ADDM concept of provenance.
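The same shape applies one level up. Here is a hedged sketch of a BAI pattern triggered off an SI (all names are again illustrative):

```
pattern ExampleBAI 1.0
  '''Sketch: a BAI inferred from a triggering SoftwareInstance.'''
  overview
    tags example;
  end overview;

  triggers
    on si := SoftwareInstance where type = 'Example App';
  end triggers;

  body
    // Triggering on an SI means ADDM gives this BAI the Primary
    // inference relationship automatically, so the BAI ages and is
    // destroyed along with the SI. Any SoftwareContainment link
    // would still have to be created explicitly.
    bai := model.BusinessApplicationInstance(
               key  := 'ExampleBusinessApp/%si.key%',
               name := 'Example Business Application',
               type := 'Example Business Application');
  end body;
end pattern;
```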

But this automatic behavior only works for certain cases like the examples above. Fortunately, they’re the most common situations for application modeling. But what happens when everything isn’t so neat and tidy?

There is a temptation in any application modeling effort to get creative with the provenance of a particular BAI. For example, we may wish to trigger our pattern off of a SoftwareComponent node (e.g. a Microsoft IIS Web Application, or a J2EE WAR File), or a DatabaseDetail. When we can be specific about the elements that guarantee the existence of that BAI, it seems logical that we should use those to the exclusion of other more prosaic node types, like say SoftwareInstance.

The problem, however, is that when we create a BusinessApplicationInstance in a pattern triggered off of something that is NOT a SoftwareInstance, ADDM no longer understands how to maintain the model. All of that automatic behavior for updating, confirming, and destroying the BAI goes out the window. If that triggering SoftwareComponent is destroyed through normal discovery, the BusinessApplicationInstance will still exist, and provenance is thoroughly broken.

This is bad. But what can we do about it? We encountered this exact situation during a recent application modeling effort with a large customer in the financial industry. Patterns were being triggered off of the best information we were able to discover: a web application SoftwareComponent node. It seemed logical, and it worked fine, until that particular web application was migrated to an entirely new server. We knew the application still existed; it had plenty of components on lots of different hosts. But even though the web application SoftwareComponent wasn’t on the host anymore, the host still shared the HostedSoftware relationship with that BAI:


Frustrating! But we were able to solve the problem. From the Node Lifecycle documentation page for BusinessApplicationInstance:

“If a Business Application Instance node is created by a pattern triggered on a node kind other than a Software Instance node or Business Application Instance node, BMC Atrium Discovery has no automatic removal behavior. Patterns must be used to explicitly destroy any such Business Application Instance node.”

Aha! The patterns necessary to explicitly destroy objects based upon given conditions are called removal blocks, and they have their own page in the ADDM documentation:

So, using a removal block, we can create a set of criteria that will provide a new set of lifecycle rules for any BusinessApplicationInstance we want to trigger off of something that isn’t a SoftwareInstance node. There are a few different techniques for utilizing removal blocks, and they can destroy nodes either before they would otherwise be destroyed according to the Model Maintenance rules, or afterwards. Maybe I’ll cover these in the next Tech Tip!
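As a sketch of the idea only: the pattern below triggers on the destruction of the web application SoftwareComponent and explicitly destroys the BAI it inferred. The exact grammar lives in the removal blocks documentation, and the `destroyed` trigger modifier, the `Primary:Inference:InferredElement` traversal, and `model.destroy()` are from my reading of the TPL reference, so verify them against your ADDM version:

```
pattern ExampleBAIRemoval 1.0
  '''Sketch: explicit removal for a BAI not inferred from an SI.'''
  overview
    tags example;
  end overview;

  triggers
    // Fire when the web application component that inferred the BAI
    // is itself destroyed by normal Model Maintenance aging.
    on sc := SoftwareComponent destroyed where type = 'Example Web Application';
  end triggers;

  body
    // Walk the Primary inference relationship from the dead component
    // to any BAI it inferred, and destroy that BAI explicitly, since
    // ADDM has no automatic removal behavior in this case.
    bais := search(in sc
                   traverse Primary:Inference:InferredElement:BusinessApplicationInstance);
    for bai in bais do
        model.destroy(bai);
    end for;
  end body;
end pattern;
```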

I hope these first few tips for ADDM have been interesting and helpful. Good luck, and Happy New Year!

For more information on ADDM or other BMC tools, please contact Warner Schlais, President of Technology Services, at