Q1 2013

The CEO’s Corner

I hope everyone had a great set of holidays and is ready for this new year. On behalf of the entire Datatrend Technologies team, I want to thank our clients and trading partners for the great partnership in 2012.

[ Read the full article ]


Customer Spotlight

In each edition of TrendSetter, the Customer Spotlight segment is designed to facilitate networking among Datatrend clients and partners, and to give them a way to showcase their offerings to the marketplace. Networking is a great way to develop new relationships, clients, and friends, and we hope this segment is of value to our readership. In this issue, our spotlight segment features one of Datatrend’s Network Services customers, Clean Harbors.

[ Read the full article ]


BlueStripe’s Application and Transaction Monitoring Product is First to Track Individual Transactions Without Code Changes, APIs, or Appliances in FactFinder v7.0

BlueStripe Software announces the availability of FactFinder v7, the first transaction performance and availability monitoring solution to trace individual transactions end-to-end across enterprise application systems without the need to add or write any new application code.

[ Read the full article ]


PureData System for Analytics, Powered by Netezza

The term “appliance” is liberally used by many vendors in the big data space these days. It seems that almost everyone has latched onto the term and it is being used not only to define data warehousing and analytic product offerings, but also to subtly (or not so subtly in some cases) set customer expectations about the underlying ease of deployment and ongoing cost of ownership associated with the product.

[ Read the full article ]


High Speed Internet Access in Hospitality

At one time hotel guests viewed Internet access as a nice amenity to have, if they could get it. Now they are practically demanding it, and many are basing their decisions about where to stay on the quality of HSIA (High Speed Internet Access) they can obtain.

[ Read the full article ]


Tech Tip: Upgrading to BMC ADDM v9.0

Last month I described the new functions and features found in ADDM version 9.0, and at the end of that tip I noted that, because the change also involves a new version of the underlying OS, an upgrade path is for the most part not available. This month I will describe in more detail the choices you have, upgrade or migration, and the consequences of those choices, along with my recommended procedure for the migration process.

[ Read the full article ]


The CEO’s Corner

Mark Waldrep

I hope everyone had a great set of holidays and is ready for this new year. On behalf of the entire Datatrend Technologies team, I want to thank our clients and trading partners for the great partnership in 2012. Competition in the IT industry is fierce, and you have many solution providers to choose from, whether hardware suppliers, infrastructure capacity planners, network infrastructure services providers, or technology services vendors. Datatrend appreciates your patronage and again, we thank you!

One of my recurring themes focuses on the importance of collaboration with our clients well ahead of planned initiatives. In this way we can provide our resources to work with you to identify critical path issues, minimize project risk and develop the best possible solutions.

We are happy to visit your office, engage with your teams in discussions and whiteboarding sessions, review scope, and plan out the projects. This type of collaboration frequently ends up reducing costs and helping projects run more smoothly.

As the CEO and cofounder of Datatrend Technologies, the buck stops here. If you have any suggestions as to how Datatrend can better serve you, please drop me a line at mark.waldrep@datatrend.com. I hope you enjoy this issue of TrendSetter and should you have any content suggestions for this publication, I would be interested in that as well.

Respectfully,

Mark Waldrep
CEO Datatrend Technologies
mark.waldrep@datatrend.com

[ back to top ]


Customer Spotlight

In each edition of TrendSetter, the Customer Spotlight segment is designed to facilitate networking among Datatrend clients and partners, and to give them a way to showcase their offerings to the marketplace. Networking is a great way to develop new relationships, clients, and friends, and we hope this segment is of value to our readership. In this issue, our spotlight segment features one of Datatrend’s Network Services customers, Clean Harbors.

Clean Harbors is the leading provider of environmental, energy and industrial services throughout North America. The Company serves a diverse customer base, including a majority of the Fortune 500 companies, thousands of smaller private entities and numerous federal, state, provincial and local governmental agencies. Through its Safety-Kleen subsidiary, Clean Harbors also is a premier provider of used oil recycling and re-refining, parts cleaning and environmental services for the small quantity generator market.

Within Clean Harbors Environmental Services, the Company offers Technical Services and Field Services. Technical Services provide a broad range of hazardous material management and disposal services including the collection, packaging, transportation, recycling, treatment and disposal of hazardous and non-hazardous waste. Field Services provide a wide variety of environmental cleanup services on customer sites or other locations on a scheduled or emergency response basis.

Within Clean Harbors Energy and Industrial Services, the Company offers Industrial Services and Oil & Gas Field Services. Industrial Services provide industrial and specialty services, such as high-pressure and chemical cleaning, catalyst handling, decoking, material processing and industrial lodging services to refineries, chemical plants, pulp and paper mills, and other industrial facilities. Oil & Gas Field Services provide exploration, surface rentals, solids control, and environmental services to the energy sector serving oil and gas exploration, production, and power generation.

Clean Harbors’ Safety-Kleen subsidiary is a leading North American used oil recycling and re-refining, parts cleaning and environmental solutions company for small quantity waste generators, and has the largest re-refining capabilities of used oil into base and blended lube oils. Safety-Kleen provides a broad set of environmentally-responsible products and services that keep businesses in balance with the environment.

Headquartered in Norwell, Massachusetts, Clean Harbors has waste disposal facilities and service locations throughout the United States and Canada, as well as Mexico and Puerto Rico. For more information about Clean Harbors, click here.

Be In the Spotlight

Any Datatrend client or partner can drop us a line (trendsetter@datatrend.com) with an authorized communication that provides an overview of your business, target market, and related information, plus your company logo. We will happily put all qualified submissions into a review queue, with the plan of publishing one each quarter.

[ back to top ]


BlueStripe’s Application and Transaction Monitoring Product is First to Track Individual Transactions Without Code Changes, APIs, or Appliances in FactFinder v7.0

Written by Warner Schlais, President of Technology Services, Datatrend Technologies, Inc.

BlueStripe Software announces the availability of FactFinder v7, the first transaction performance and availability monitoring solution to trace individual transactions end-to-end across enterprise application systems without the need to add or write any new application code. The latest version of BlueStripe’s award-winning application and transaction monitoring product includes TransactionLink™ technology, which automatically follows every transaction wherever it goes across the infrastructure.

FactFinder tracks transactions across web tiers, application servers, middleware, databases, mainframes, private and hybrid cloud. The transaction monitoring tool follows slow transactions right to the problem component, then drills down the server stack to find the true root cause of any performance or availability problem. Common problems include resource depletion by other applications, storage bottlenecks, server configuration errors, and even problems caused by other management tools.

A key component of FactFinder v7 is the breakthrough TransactionLink tracking technology, which can support millions of complex transaction requests per day. With TransactionLink, IT Operation teams can understand the true transaction path for any individual transaction without the burden of intrusive tags, code changes or network devices. TransactionLink intelligently reads the unique characteristics of each transaction – similar to reading the transaction’s genetic code – and uses this to automatically identify and follow each individual transaction across tiers and into the server stacks to solve problems faster.

Diagnose the root causes of outages in complex systems—trace transactions across each tier to find the bottleneck, then drill into the application platforms and server to find the problem.

In addition to TransactionLink, FactFinder v7 delivers automatic identification of bottlenecks, visibility into complex middleware to measure true round-trip response times, and business payload tracking to capture important details within a transaction request. Specific capabilities include:

  • Transaction Explorer with Automatic Bottleneck Detection
    Provides views into a specific individual transaction path and shows the exact components and systems it crossed. FactFinder automatically identifies the component or network connection where the most time was spent.
  • Support for Complex Messaging Middleware
    Adaptable to any communications protocol, FactFinder traces entire transactions everywhere they go, even across message queuing systems like MQ Series and TIBCO Rendezvous, hybrid cloud environments, and enterprise architectures including SOA & Web Services. FactFinder measures and reports the true round-trip response time of these asynchronous requests.
  • Business Transaction Payload Tracking
    Captures all information within a transaction request including business-relevant information like an IP address or user name, as well as diagnostically useful information such as which SQL statement was sent to a database and what the result code was.

“IT organizations must understand the risks associated with their mission critical applications, from knowing the systems that production applications depend on to seeing the performance of new applications as they roll out,” said Donna Scott, Gartner vice president and distinguished analyst. “Real-time transaction visibility is a great way for IT Operations teams to avoid availability surprises by knowing what these dependencies are and, when the inevitable occurs, finding and fixing problems quickly.”

What’s New in FactFinder™ v7

BlueStripe’s Transaction Performance and Availability Monitoring solution, FactFinder, enables IT teams to monitor transactions throughout the application lifecycle, to identify and fix problems as they work to test, certify, deploy, and manage all of their applications. This new version of FactFinder is centered on major breakthroughs in automatically identifying and tracking transactions end-to-end.

TransactionLink: Automatic Business Transaction Tracing

FactFinder v7 includes a new BlueStripe innovation called TransactionLink, which automatically and accurately tracks transactions end-to-end. TransactionLink works at the scale and complexity of enterprise environments, supporting millions of transaction requests per day.

  • TransactionLink is completely automatic, enabling IT support teams to trace transactions without the burden of intrusive tags, code changes, or network devices.
  • Every transaction has unique characteristics. TransactionLink intelligently reads these characteristics—like reading the transaction’s genetic code—and uses that genetic code to identify and follow the transaction across tiers.
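The fingerprint-and-follow idea can be illustrated with a small sketch: derive a stable fingerprint from a request's observable traits, then stitch together per-tier observations that share a fingerprint into one ordered path. The traits and field names below are invented for this example; BlueStripe's actual matching heuristics are proprietary and not described here.

```python
import hashlib

def fingerprint(request):
    """Derive a stable fingerprint from a request's observable traits.
    The chosen traits (method, payload length, sequence number) are
    illustrative stand-ins, not BlueStripe's actual heuristics."""
    traits = f"{request['method']}|{request['payload_len']}|{request['seq']}"
    return hashlib.sha1(traits.encode()).hexdigest()[:12]

def stitch(hops):
    """Group per-tier observations that share a fingerprint into one
    end-to-end transaction path, ordered by timestamp."""
    paths = {}
    for hop in hops:
        paths.setdefault(fingerprint(hop), []).append(hop)
    return {fp: sorted(p, key=lambda h: h["ts"]) for fp, p in paths.items()}

# Two observations of the same logical request, seen at different tiers
hops = [
    {"method": "GET /cart", "payload_len": 512, "seq": 7, "ts": 2, "tier": "app"},
    {"method": "GET /cart", "payload_len": 512, "seq": 7, "ts": 1, "tier": "web"},
]
path = next(iter(stitch(hops).values()))
print([h["tier"] for h in path])  # ['web', 'app']
```

Because matching keys off intrinsic request traits rather than injected tags, no application code changes are needed, which is the point TransactionLink makes.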

New Individual Transaction Trace Explorer with Automatic Bottleneck Detection

In the new FactFinder Trace Explorer, IT Operations and support teams can see where a specific transaction instance went, which components and systems it used, and where it spent time. The Trace Explorer also shows how much time was spent processing within each component versus crossing the network between components. FactFinder Automatic Bottleneck Detection speeds up troubleshooting by pointing out the component in which most of the response time was spent.
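The bottleneck rule the Trace Explorer applies, pick the component where the most time was spent, reduces to a few lines. The span structure below is a hypothetical stand-in for FactFinder's data model, splitting each hop into processing time and network time:

```python
def bottleneck(spans):
    """Given per-component spans for one transaction, return the
    component where the most total time (processing plus network)
    was spent. Field names are illustrative, not FactFinder's."""
    return max(spans, key=lambda s: s["processing_ms"] + s["network_ms"])

spans = [
    {"component": "web",      "processing_ms": 12,  "network_ms": 3},
    {"component": "app",      "processing_ms": 40,  "network_ms": 5},
    {"component": "database", "processing_ms": 310, "network_ms": 8},
]
worst = bottleneck(spans)
print(worst["component"])  # database
```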

Support for Complex Messaging Middleware

FactFinder can trace entire transactions, everywhere they go, even across message queuing systems (like MQ Series and TIBCO Rendezvous) and complex architectures like SOA, Web services, and hybrid cloud. FactFinder also accurately reports the true round-trip response time of asynchronous requests. For example, in an MQ architecture, FactFinder reports the true response time from the first PUT to the delivery of results after the corresponding GET.

Advanced Transaction Payload Tracking

FactFinder can capture any information in a transaction request, either business-relevant info (like the IP address or user name) or diagnostically useful information (like which SQL statement was sent to a database and what the result code was). For some architectures, like message queuing or Web services, the real transaction is “hidden” within the request payload. For example, FactFinder can show a CICS administrator how a CICS request “hidden” within an MQ “PUT” performed, as well as gathering the CICS return codes or errors. Transaction payload tracking can be extended to support protocols on any system, even proprietary protocols.
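Payload tracking amounts to pulling named fields out of the raw request bytes. The sketch below shows the idea with simple regular expressions; the payload layout and field names are invented for this example, and real protocols need real parsers:

```python
import re

def extract_fields(payload):
    """Pull illustrative business fields (user) and diagnostic fields
    (SQL statement, result code) out of a raw request payload. The
    payload format here is hypothetical."""
    fields = {}
    m = re.search(r"user=(\w+)", payload)
    if m:
        fields["user"] = m.group(1)
    m = re.search(r"sql='([^']*)'", payload)
    if m:
        fields["sql"] = m.group(1)
    m = re.search(r"rc=(-?\d+)", payload)
    if m:
        fields["result_code"] = int(m.group(1))
    return fields

payload = "user=alice sql='SELECT * FROM orders' rc=0"
print(extract_fields(payload))
# {'user': 'alice', 'sql': 'SELECT * FROM orders', 'result_code': 0}
```

This is also how a "hidden" transaction, such as a CICS request carried inside an MQ PUT, becomes visible: the tracker parses the inner request out of the outer payload.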

Remember, Datatrend offers a variety of services surrounding BlueStripe solutions, ranging from purchase and integration to upgrades and more. For more information, please contact Warner Schlais, President of Technology Services, at 952-563-2193, or warner.schlais@datatrend.com.

[ back to top ]


IBM PureData System for Analytics, Powered by Netezza
Differentiating this true data warehouse appliance from the competition

Written by Adam Ronthal, Sr. Technical Marketing & Competitive Analyst
PureData, Netezza, and Big Data Solutions
IBM

The term “appliance” is liberally used by many vendors in the big data space these days. It seems that almost everyone has latched onto the term and it is being used not only to define data warehousing and analytic product offerings, but also to subtly (or not so subtly in some cases) set customer expectations about the underlying ease of deployment and ongoing cost of ownership associated with the product.

After all, everyone knows that appliances are easy to use. You simply plug them in, give them bread, and they spit out toast, right?

Indeed, with respect to Oracle’s “appliance” offerings, IBM’s Steve Mills was recently quoted on ZDNet as saying:

“It’s easy to throw a lot of stuff into a crate, ship it as one thing and say it’s an integrated product”…

But that does not provide appliance simplicity, ease of use, and value.

Wikipedia defines a data warehouse appliance as “an integrated set of servers, storage, operating system(s), DBMS and software specifically pre-installed and pre-optimized for data warehousing.”

While essentially correct, this definition does not convey some of the key aspects of a true appliance. Let’s take a look at some general appliance truisms, or what I have come to refer to as the Golden Rules of Appliances:

• Appliances are Plug and Play
• Appliances are purpose-built
• Appliances are easy to use

People like appliances because they provide rapid time to value, do what they are supposed to do (and little else), and do it all with a keep-it-simple mindset. (Translation: IT can spend time adding business value instead of spending time on tedious system integration work.) If we accept the above points as inherent to a solution’s “applianceness,” they provide an excellent framework by which to evaluate the appliance claims of various solution providers. The key players in this space are Oracle Exadata, EMC Greenplum, Teradata, HP, ParAccel, and of course, IBM.

Probably the single most important factor differentiating a true appliance from an appliance imposter is ease of use. True appliances take things that were previously difficult and make them easy. Case in point: the Zone Map functionality of IBM PureData System for Analytics (Netezza), which renders the table partitioning and indexes required by most other solutions unnecessary by automatically telling the appliance where not to look for data. Features like compression are built in, and thanks to the field programmable gate arrays (FPGAs) inherent in the PureData System for Analytics (Netezza) technology, they act as performance enhancers.
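The zone map idea can be illustrated with a short sketch: record the min and max of a column for each storage block, then skip any block whose range cannot satisfy the predicate. This is a conceptual model only, not Netezza's implementation:

```python
def build_zone_map(blocks):
    """Record the (min, max) of the column value for each storage block."""
    return [(min(b), max(b)) for b in blocks]

def blocks_to_scan(zone_map, lo, hi):
    """Return indexes of blocks whose [min, max] range could contain
    rows matching the predicate lo <= value <= hi; all other blocks
    are skipped without being read."""
    return [i for i, (bmin, bmax) in enumerate(zone_map)
            if bmax >= lo and bmin <= hi]

# Data naturally clustered by load order: each block spans a narrow range
blocks = [[1, 5, 9], [10, 14, 19], [20, 25, 29], [30, 33, 39]]
zm = build_zone_map(blocks)
print(blocks_to_scan(zm, 12, 22))  # only blocks 1 and 2 need scanning: [1, 2]
```

Because the map is maintained automatically, the administrator never defines partitions or indexes; the pruning simply happens.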

In keeping with the simplicity message, there are not dozens or hundreds of things that must be tweaked to ensure good performance. Rather, following 6-12 general best practice guidelines ensures excellent performance nearly all the time.

Oracle Exadata still has all of the complexity associated with Oracle RAC, along with the constant tuning and optimization that requires small armies of DBAs to keep an Oracle environment running smoothly. When there are hundreds of knobs and buttons to adjust for optimal performance, it doesn’t feel very appliance-like.

Teradata is also notorious for requiring lots of resources or expensive professional services to keep its environments up and running. One large insurance company I have worked with runs PureData System for Analytics (Netezza) solutions side by side with Teradata as a shared service. Guess which one costs them more to run for equivalent performance? Guess which environment routinely delivers 99.5% of its well over a million monthly queries in less than 60 seconds?

As to the smaller players, take a look at the EMC Greenplum admin guide. It is readily available for download with a bit of Google time, and at well over 1,000 pages it is nearly twice as long as the IBM PureData System for Analytics (Netezza) admin guide. Manual configuration of high availability, storage configuration, and management of each host segment add significant complexity to this solution. And compression is not something to be undertaken lightly, as it is a clear performance detractor for Greenplum.

HP Vertica is largely unproven in real-world environments, and while columnar databases are the darlings of the analyst world right now, the IBM PureData System for Analytics (Netezza) FPGA technology provides all of the benefits of columnar databases without any of the drawbacks. Vertica environments are complex to get up and running and present a steep learning curve for DBAs.

ParAccel is probably the closest to a true appliance in the ease-of-use category, but still requires significantly more administrative overhead than what we would like to see in a true appliance. Indeed, ParAccel recommends that customers bolt on SAN storage for optimal performance, thus adding to the overall management complexity of the environment.

To see how the key vendors stack up in living up to the three golden rules of appliances, let’s take a look at the scorecard:

Vendor            Plug and Play   Purpose Built   Easy to Use   Score
IBM Netezza             X               X              X           3
Teradata                X               X                          2
Oracle Exadata                          X                          1
ParAccel                                X                          1
EMC Greenplum                           X                          1
HP Vertica                              X                          1

Clearly only IBM truly understands the implications of building a true appliance.

Engineered from the ground up for one purpose, with hardware, software, and storage all built to work together, IBM PureData System for Analytics (powered by Netezza) continues to lead the analytic appliance market that Netezza defined in the early 2000s.

For additional information on how IBM’s PureData System for Analytics powered by Netezza can help your company’s analytics strategy, contact Charlie Cox, President of Technology Infrastructure Solutions at Datatrend, charlie.cox@datatrend.com.

[ back to top ]


High Speed Internet Access in Hospitality

Written by Bill Roberts, President of Network Services, Datatrend Technologies, Inc.

At one time hotel guests viewed Internet access as a nice amenity to have, if they could get it. Now they are practically demanding it, and many are basing their decisions about where to stay on the quality of HSIA (High Speed Internet Access) they can obtain. Guests are also putting more demands on the hotel’s network. They are carrying more and more mobile devices, and they are not just checking email; some are streaming videos, downloading music, and updating their social networks. The bandwidth demand keeps growing.

Hotels are deciding to make large investments to improve the quality of HSIA they can offer their guests. A critical focus of this investment is the upgrade of the existing network infrastructure. After determining how much bandwidth needs to be brought into the hotel, the guestrooms, and the meeting and conference rooms, a complete design of the new network infrastructure to support that bandwidth is required. Key elements of that design include:

  • Identification of building layout and construction
  • Identification of potential difficulties for installation
  • Complete RF Survey to determine location of Access Points and other HSIA devices
  • Cable plant design – Fiber optic, Category 5E or Category 6 cable requirements
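The bandwidth-sizing step above can be sketched as simple arithmetic: rooms, occupancy, devices per guest, peak concurrency, and per-device demand multiply out to a planning figure. All inputs below are illustrative planning assumptions, not Datatrend sizing guidance.

```python
def required_bandwidth_mbps(rooms, occupancy, devices_per_guest,
                            concurrency, per_device_mbps):
    """Rough sizing: rooms * occupancy rate * devices per guest *
    fraction of devices active at peak * per-device demand (Mbps).
    A real design would also budget for meeting rooms and back-office
    systems on top of this guestroom figure."""
    return rooms * occupancy * devices_per_guest * concurrency * per_device_mbps

# 200 rooms, 50% occupied, 2 devices/guest, 25% active at peak, 2 Mbps each
print(required_bandwidth_mbps(200, 0.5, 2, 0.25, 2.0))  # 100.0
```

Streaming video pushes the per-device figure up sharply, which is why the article notes that demand keeps growing.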

The network infrastructure design should not be limited to HSIA considerations alone. This is also the opportunity to prepare for the installation or upgrade of other services, including:

  • Voice-over-IP
  • Energy Management Systems
  • Digital video on-demand
  • IPTV
  • Video Conferencing

Building a quality network infrastructure for the future that leverages all of these technologies helps justify the more immediate investment required for HSIA.

For more information on HSIA network infrastructure assessments and installations from Datatrend, click here or contact Bill Roberts at bill.roberts@datatrend.com. You can also consult with a Datatrend representative by calling 800-367-7472.

[ back to top ]


Tech Tip: BMC ADDM v9.0 – Upgrade or Migrate?

Written by Mark Neuman, Technical Services Project Manager, Datatrend Technologies, Inc.

Last month I described the new functions and features found in ADDM version 9.0, and at the end of that tip I noted that, because the change also involves a new version of the underlying operating system, an upgrade path is for the most part not available. This month I will describe in more detail the choices you have, upgrade or migration, and the consequences of those choices, along with my recommended procedure for the migration process.

In both cases, what we are attempting to accomplish is to bring your ADDM installation to the 9.0 level while keeping your scan data and, more importantly, your configuration data intact. If that is less of a concern, i.e., the configuration would be easy to re-establish and your data is just a scan away, then a fresh install is always an option.

Yes, there is an upgrade, but you will be limited after that

While an upgrade is available, there are limits on what you can do within that environment. First, because the upgrade will not upgrade the underlying Red Hat Linux operating system from version 5 to version 6, you will not be able to set up or scan any devices using IPv6. Second, while you can perform an upgrade from ADDM version 8.3.x to ADDM version 9.0, you will not be able to upgrade any further; that is, you will not be able to upgrade to ADDM version 9.1 and beyond. Third, if you are running any 32-bit versions of ADDM, the upgrade path is not available at all, as ADDM version 9.0 and beyond is available only in 64-bit.

The upgrade procedure is very similar to all of the previous ADDM upgrades, but contains a few additional steps. Above all, I strongly recommend that a snapshot be taken prior to this upgrade so you can recover from any issues, should they occur. If the appliances are running under VMware, it is even better to take a VMware snapshot prior to commencing the upgrade procedure, as the recovery will be faster and simpler.
As noted, the upgrade process itself is similar to previous upgrades, in that you will run the upgrade script that takes care of the upgrade itself. However, there are some additional steps and considerations to take into account before and during this upgrade, as described below.

  • Even though an upgraded appliance cannot scan IPv6, the changes needed to support IPv6 are factored into the appliance and you will need to take into account the effect of these changes.
    • The regexes used in scan ranges and credential ranges have been changed (wildcards have been removed), so you may need to re-factor those ranges. The upgrade script will attempt to make most of these changes, but you will need, at the very least, to verify the configuration of these elements after the upgrade.
    • In order to support IPv6 and SNMP-managed devices, changes were made to the underlying network model, taxonomy, and datastore, so any patterns that refer to the old model will need to be updated. During the upgrade these patterns will be flagged for review.
  • Set the ZONE variable in the /etc/sysconfig/clock file to prevent the timezone configuration from being overwritten.
  • Look at the postupgrade_9.0_TODO.log for additional customized completion tasks as a result of your upgrade. Some common tasks that will be noted in the log file are:
    • Windows Proxy compatibility if you are using older Windows proxies.
    • Activate the new TKU that the upgrade process installs. You must de-activate any other TKU package that came across with the upgrade first.
    • Review all patterns flagged by the upgrade. These patterns may use IP wildcard, or data structures that have changed and are referenced in the pattern.
    • After verifying any Taxonomy extensions that you may have, you may need to re-import them.
    • Review any export mapping sets as there have been changes to those patterns. This is especially true if you have either changed the default mappings, hopefully by copying the original rather than making changes directly in the TKU pattern, or have created new mapping patterns.
    • Changes have been made to the base discovery scripts, so you will need to review these changes and re-apply any customization that you may have implemented in your previous version. If you are in a tightly controlled security environment, you will want to review the “commands” that are used in these scripts for changes, as you may need to make changes to any privilege upgrade mechanism.
  • If you are using a scanner/consolidator configuration, you need to stop all scans, allow the consolidators to complete, and then upgrade ALL appliances. After which you will restart the consolidator and allow the startup process to complete its tasks, then restart each scanning appliance.
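When verifying the converted scan and credential ranges, it can help to translate old-style trailing-wildcard ranges into CIDR form yourself and compare against what the upgrade script produced. The helper below is hypothetical, not part of ADDM, and assumes the old syntax only used trailing wildcards:

```python
def wildcard_to_cidr(rng):
    """Convert an old-style wildcard scan range like '10.1.*.*' into
    CIDR notation. Hypothetical helper for sanity-checking the upgrade
    script's output; only trailing-wildcard ranges are supported."""
    octets = rng.split(".")
    if "*" not in octets:
        return rng + "/32"  # a single host
    first_wild = octets.index("*")
    if any(o != "*" for o in octets[first_wild:]):
        raise ValueError(f"non-trailing wildcard in {rng!r}")
    network = octets[:first_wild] + ["0"] * (4 - first_wild)
    return ".".join(network) + f"/{first_wild * 8}"

print(wildcard_to_cidr("10.1.*.*"))     # 10.1.0.0/16
print(wildcard_to_cidr("192.168.5.7"))  # 192.168.5.7/32
```

Any range the helper rejects (a non-trailing wildcard) is exactly the kind of entry worth inspecting by hand after the upgrade.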

A better option, if you have the resources, is migration, so you can “get it over with.” As you can see, there are limits to the upgrade choice of going from ADDM version 8.3.x to ADDM version 9.0, thus the recommended approach is to perform the migration. The largest downside to the migration install is that you will need to support the equivalent of a new installation of ADDM during the migration process; however, after the migration is completed you can release the resources from the “old” ADDM 8 appliance back to support services.

Something to consider is that this may be the perfect time to allocate better resources, more CPU, disk, or memory, to your ADDM infrastructure if you are reaching the limits of performance on your current appliance. To this end I will always recommend physical hardware for any larger ADDM reporting system or consolidation appliance, and yes, you can install directly on most x86 hardware systems by using the “Kickstart install,” so you are not limited to a VMware-only appliance.

One advantage of performing a migration is that you do not disturb your original appliance should you encounter an issue with the migration procedure on the “new” ADDM appliance. You should not, however, perform any additional scans on the current appliance if you want that data to be present on the new ADDM image, because once the “migration snapshot” has been taken, that additional data has no way of getting over to the new appliance.

The migration procedure consists of:

  • Standing up a new ADDM version 9.0 appliance.
    • You will want to use the new User Interface (UI) tools to set up any additional disks that may be needed at this time.
    • Depending on security considerations, such as Access Control Lists that may be in effect, you may want to stand this appliance up on the same IP as the original ADDM 8.3.x appliance. If so, you will need to change the IP of that original ADDM 8.3.x appliance prior to standing up the new ADDM 9.0 appliance.
  • Downloading, from the new ADDM 9.0 appliance, the migration script; this script is run on the original ADDM 8.3.x appliance, takes the snapshot data, and saves it on the “new” ADDM 9.0 appliance.
    • The migration script can automatically transfer the data from the original appliance to the correct location on the new ADDM 9.0 appliance, saving you the step of manually moving those files.
  • Using the UI, perform a “restore”; this will take a long time, and will end with a restart of the appliance.
  • Look at the postmigrate_9.0_TODO.log for additional customized completion tasks as a result of your migration. Some common tasks that will be noted in the log file are:
    • Conversion of any disabled regex/wildcard credentials.
    • Windows Proxy compatibility if you are using older Windows proxies.
    • Activate the new TKU that the migration process installs. You must de-activate any other TKU package that came across with the migration first.
    • Review all patterns flagged by the migration. These patterns may use IP wildcard, or data structures that have changed and are referenced in the pattern.
    • After verifying any Taxonomy extensions that you may have, you may need to re-import them.
    • Review any export mapping sets as there have been changes to those patterns. This is especially true if you have either changed the default mappings, hopefully by copying the original rather than making changes directly in the TKU pattern, or have created new mapping patterns.
    • Changes have been made to the base discovery scripts, so you will need to review these changes and re-apply any customization that you may have implemented in your previous version. If you are in a tightly controlled security environment, you will want to review the “commands” that are used in these scripts for changes, as you may need to make changes to any privilege upgrade mechanism.
  • If you have not changed IPs prior to this point, and wish to do so, now is the time.
  • If you have a scanner/consolidation configuration and have upgraded all of your appliances, you can now start discovery on the consolidation appliance followed by discovery on the scanning appliances.
  • After testing you can stop the “old” ADDM 8.3.x appliance and either return those resources back to your support services, or install ADDM 9.0 on that VMware, or server, and use those resources to support the next migration.

A few notes and observations: If you do not have a lot of “extra” resources, you can perform a “rolling” migration. That is where you stand up the first ADDM 9.0 appliance and perform the migration into that appliance. Then you install the ADDM 9.0 image onto the “old” appliance and migrate the next ADDM 8.3.x appliance image onto it. This can be followed by the next image in turn until all ADDM appliances have been migrated, at which point you can release the resources from the last one back to support services. This presupposes that the “size” of each ADDM appliance is the same; however, in most scanner/consolidator environments the scanning appliances will be roughly the same size, with the consolidator being much larger. So, while you can use the “rolling” concept for the scanners, you will need to do a parallel migration for the consolidator.

Remember to copy the snapshot files off of the system after creating the snapshot. In order to restore a snapshot, should you need to recover, the appliance must be the same version as when the snapshot was taken. Thus you will need to re-install the base ADDM image, and if you have not copied the files off the appliance, you will overwrite them during this install.

Because this is a major change to the ADDM product, there are more areas that could cause the upgrade or migration to fail, thus careful planning is the key to success. Above all, read the upgrade or migration instructions through very carefully before even starting, and follow them exactly during the process.

If you are interested in upgrading or migrating to the latest version of BMC ADDM, or have questions for us, please contact Warner Schlais, President of Technology Services, at 952-563-2193, or warner.schlais@datatrend.com.
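A small script can make the "copy the snapshot files off" step harder to forget. The paths in this sketch are placeholders; point them at wherever your appliance actually writes its snapshot files and at backup storage that survives a base-image re-install:

```python
import shutil
from pathlib import Path

def archive_snapshots(snapshot_dir, backup_dir):
    """Copy every snapshot file off the appliance before re-installing
    the base image. Both directory paths are placeholders to be
    replaced with your real snapshot and backup locations. Returns
    the list of file names copied."""
    backup = Path(backup_dir)
    backup.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(Path(snapshot_dir).glob("*")):
        if f.is_file():
            shutil.copy2(f, backup / f.name)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied
```

Run it, or something like it, immediately after each snapshot, so a failed migration never leaves you without a restorable copy.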

[ back to top ]