
Friday, April 10, 2020

IBM Db2 Analytics Accelerator: Time to Upgrade?


This post is about the IBM Db2 Analytics Accelerator, sometimes (and hereinafter) referred to as IDAA.
First of all, for those who don’t know, let’s start with what it is. IDAA is a high-performance component, typically delivered as an appliance, that is tightly integrated with Db2 for z/OS. It delivers high-speed processing for complex Db2 queries to support business-critical reporting and analytic workloads.  

The general idea is to enable HTAP (Hybrid Transaction Analytical Processing) from the same database, on Db2 for z/OS. IDAA stores data in a columnar format that is ideal for speeding up complex queries – sometimes by orders of magnitude.
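To give a flavor of how this works in practice, whether Db2 even considers sending a query to the accelerator is governed by the CURRENT QUERY ACCELERATION special register. Here is a minimal sketch; the table and column names are hypothetical, and eligibility rules vary by Db2 and IDAA version:

```sql
-- Ask Db2 to offload eligible queries to the accelerator,
-- falling back to native Db2 execution if acceleration fails.
SET CURRENT QUERY ACCELERATION = ENABLE WITH FAILBACK;

-- A typical complex analytic query that could benefit from the
-- columnar store (the SALES table and its columns are made up).
SELECT REGION, PRODUCT_LINE, SUM(REVENUE) AS TOTAL_REVENUE
FROM   SALES
WHERE  SALE_DATE BETWEEN '2019-01-01' AND '2019-12-31'
GROUP BY REGION, PRODUCT_LINE
ORDER BY TOTAL_REVENUE DESC;
```

Other register settings include NONE (never accelerate) and stricter routing options such as ELIGIBLE and ALL. Meanwhile, short transactional lookups continue to run natively in Db2, which is what makes the hybrid (HTAP) approach work.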

Now there is a lot more to IDAA, but we won’t cover it all in today’s blog. If you want more details, I direct you to the following links:


Anyway, the real purpose of today’s blog entry is to alert IDAA users that you need to be aware of some recent and upcoming support and version issues.


IDAA Version 7

The current version of IDAA is V7.5; it was announced on October 15, 2019 and became generally available on December 6, 2019. But many customers are not there yet. This is not surprising given that it has only been about 4 or 5 months since it became available. Nevertheless, it offers an abundance of great functionality and usability improvements. At the top of the list are greater scalability and improved synchronization.

Because the data in an IDAA is stored separately from the data in the primary Db2 for z/OS system, when data is changed in Db2 for z/OS it must be replicated to the IDAA. This introduces latency: a window of time during which the data differs between the two systems. Of course, this is not ideal.

Well, the latest and greatest iteration of IDAA has greatly improved things with Integrated Synchronization, which provides low-latency data coherency. Db2 12 for z/OS (FL 500) delivers the Log Data Provider, which captures changes and funnels them to the IDAA. It is quick, uses very little CPU, and is zIIP-enabled. This reduces the latency between Db2 for z/OS data and IDAA data to the point where it becomes mostly irrelevant.
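For completeness, data only gets into the accelerator in the first place after a table is defined to it and loaded, which is done with IDAA-supplied stored procedures. The sketch below is heavily hedged: the accelerator name is a placeholder and the XML specification parameters are elided, since their exact format varies by IDAA version (check the IDAA stored procedure reference for your release):

```sql
-- Define a Db2 table to the accelerator (the accelerator name
-- and the XML table specification shown here are placeholders).
CALL SYSPROC.ACCEL_ADD_TABLES(
       'IDAA1',
       '<tableSpecifications> ... </tableSpecifications>',
       NULL);

-- Copy the table's data into the accelerator's columnar store.
CALL SYSPROC.ACCEL_LOAD_TABLES(
       'IDAA1',
       'NONE',
       '<tableSetForLoad> ... </tableSetForLoad>',
       NULL);
```

After the initial load, ongoing changes are kept in sync by the replication mechanism in use (such as the Log Data Provider described above) rather than by repeated full reloads.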

Additionally, V7 was the first version of IDAA to allow deployment on IFLs, instead of on a separate physical piece of hardware. This means you can accelerate Db2 for z/OS queries completely on the mainframe. And V7.5 expands the scalability of IFLs.

Important Information for Laggards

Perhaps the most important piece of information in today’s blog post though is for those of you who are still running older versions of IDAA… specifically, V4. The end of service date for IDAA V4 is imminent – April 30, 2020 – and there will be no extension of this date. So if you are still on V4, it is time to upgrade!

Fortunately, you can upgrade to IDAA V5 at no cost. Sure, V5 is not the most current version of IDAA, but IBM has not issued an end of service (EOS) date for it yet. The EOS date is tentatively expected in the first half of 2023 (the same as for the IBM PureData System for Analytics N3001, on which this earlier version of IDAA is based).

Today’s Bottom IDAA Line

If you are looking for an efficient, cost-effective query accelerator for your complex Db2 queries, you should look into IDAA V7.5.

And if you are still running V4, upgrade soon (by the end of the month!) to avoid running an out-of-service version of IDAA.

Tuesday, July 16, 2019

Proud to be an IBM Champion

Just a quick post today about the IBM Champions program. If you haven't heard of it, it is a special program run by IBM to recognize and reward non-IBM thought leaders for their work associated with IBM products and communities.

IBM publishes the list of IBM Champions annually and the title is valid for one year. So, champions must be nominated each year to maintain their status.

I want to thank IBM for running such a wonderful program and for all they have done to help recognize those of us in the trenches using IBM's technology. I have been named an IBM Champion for Data and Analytics again this year... for the 10th time... and IBM bestowed upon me another Acclaim badge to mark the occasion.


As an IBM Champion I have had the opportunity to interact with IBM folks and with other IBM Champions at events, webinars, and in person, and it has definitely helped to enrich my professional life.

Although the majority of IBM Champions focus on data and analytics, the program is not just for data people! IBM names champions in each of the following nine categories: 
  • Data & Analytics
  • Cloud 
  • Collaboration Solutions 
  • Power Systems 
  • Storage 
  • IBM Z 
  • Watson IoT 
  • Blockchain 
  • Security 
If you are, or know of, somebody who should be an IBM Champion, you can nominate them here: https://developer.ibm.com/champions/.

Thanks again, IBM... and congratulations to all of this year's IBM Champions.

Monday, November 02, 2015

IBM Insight 2015 Wrap-Up

Last week I attended the IBM Insight conference and blogged about the first few days of the conference here at http://db2portal.blogspot.com/2015/10/an-update-from-ibm-insight-2015.html… and I promised to blog about the remainder of the conference, so here is a synopsis of the highlights.


On Wednesday, the focus of the general session was on IBM’s acquisition of The Weather Company’s technology.  The deal calls for IBM to acquire The Weather Company’s B2B, mobile and cloud-based web properties, including WSI, weather.com, Weather Underground and The Weather Company brand. IBM will not be acquiring The Weather Channel television station, which will license weather forecast data and analytics from IBM under a long-term contract. IBM intends to utilize its newly acquired weather data in its Watson platform.

The deal is expected to close in the first quarter of 2016. Terms were not disclosed.

You can read all about the acquisition in this IBM press release.

I spent some of my time at Insight this year learning more about dashDB, and it is a very interesting technology. Marketed as data warehousing in the cloud, IBM touts four use cases for dashDB: as a standalone cloud data warehouse, as a data store for data scientists, as part of a hybrid data warehouse, and for NoSQL analysis and rapid prototyping.

IBM promotes simplicity, performance, analytics on both traditional and NoSQL data, and polyglot language support as the most important highlights of dashDB. And because it has DB2 BLU under the covers, dashDB not only super-compresses data, but it can operate on that data without necessarily decompressing it.

Additionally, a big theme of the conference was in-memory technology, and dashDB sports CPU cache optimizations. In fact, I heard several folks at the conference say some variation of “RAM is too slow”… meaning that CPU cache is faster and IBM is moving in that direction.

The bottom line for dashDB is that it offers built-in high availability and workload management capabilities, along with being in-memory optimized and scalable. Worth a look for folks needing a powerful data warehousing platform.

For you DB2 for z/OS folks, IDAA was a big theme of this year’s Insight conference. The latest version, V5.1, adds advanced analytics capabilities and in-database transformation, making mainframe queries that can take advantage of the accelerator faster than ever.

Apache Spark was another pervasive topic this year. It was talked about in multiple sessions and I even had the opportunity to play with it in a hands-on lab. The big news for z folks is that IBM is bringing out a version of Spark that will run on z/OS – it is already supported on zLinux.

Of course, I attended a whole slew of DB2 sessions, including SQL coding, performance, and administration presentations. Some of the highlights included the announcement of DB2 11 for LUW, several discussions about dark data, and a lot of information about IBM's Big SQL and how it can be used to rapidly and efficiently access Hadoop (and other unstructured) data using SQL.

I live-tweeted a bunch of highlights of those sessions, too. Indeed, there were too many to include here. If you are interested in catching everything I have to say about a conference, keep reading these blog posts, of course, but you should really follow me on Twitter, too, at http://twitter.com/craigmullins

I also had the honor of delivering a presentation at this year's conference on the changes and trends going on in the world of DB2 for z/OS. Thanks to the 70 or so people who attended my session - I hope you all enjoyed it and learned something, too!

As usual, and well you know it if you've ever attended this conference before, there was also a LOT of walking to be done: from the hotel to the conference center to the expo hall to lunch and back to the conference center. But at least there were some signs making light of the situation this year!

There was a lot of fun to be had at the conference, too. The vendor exhibition hall was stocked with many vendors, big and small, and it seems like they all had candy. I guess that’s what you get when the conference is so close to Halloween! The annual Z party at the House of Blues (for which you need a Z pin to get in – this year’s pin was orange) was a blast, and the Maroon 5 concert courtesy of Rocket Software was a lot of fun, too.

If you are looking for a week of database, big data, and analytics knowledge transfer, the opportunity to chat and connect with your peers, as well as some night-time entertainment, be sure to plan to attend next year’s IBM Insight conference (October 23 thru 27, 2016 at the Mandalay Bay in Las Vegas).

Monday, October 26, 2015

An Update from IBM Insight 2015

The annual IBM Insight conference is being held this week in Las Vegas, as usual at the Mandalay Bay Resort and Conference Center. And I have the good fortune to be in attendance.

If you follow me on Twitter (http://www.twitter.com/craigmullins), you will see that I am live-tweeting many of the sessions I am attending at the conference. But for those of you who are not following along on Twitter, or who just prefer a summary, here’s a quick overview of Monday’s highlights.

The day kicked off with the general session keynote, which was delivered by a combination of Jake Porway (of DataKind), IBMers, and IBM customers. The theme of the keynote was analytics and cognitive computing. The emphasis of the event, in my opinion, has kind of shifted from the data to what is being done with the data… in other words, the applications. And that is interesting, because the data is there to support the applications, right? That said, I’m a data bigot from way back, so it was a bit app-heavy for me.

Still, there were some insightful moments delivered during the keynote. Bob Picciano, Senior VP of IBM Analytics, kicked things off by informing the audience that true insight comes from exploiting existing data, dark data, and IoT data with agility driven by the cloud. That’s a lot of buzzwords, but it makes sense! And then executives from Whirlpool, Coca-Cola, and Twitter were brought out to talk about how they derive insight from their data and systems.

Perhaps the most interesting portion of the general session was the Watson discussion, led by Mike Rhodin, Senior VP, IBM Watson. We learned more about cognitive systems and how they get more valuable over time as they learn, which makes them unique in the world of computers and IT. IBM shared that there are more than 50 core technologies used by IBM Watson to deliver its cognitive computing capabilities. It was exciting to learn about systems that reason, never stop learning and drive more value over time.
Additional apps were discussed, teaching us about the various ways people choose wine, that nobody starts thinking about ice cream early in the week, and that when the weather changes, women buy warmer clothes while men buy beer and chips. You kind of had to be there, I guess!

Of course, this was only the first session of a long day. Additional highlights of the day included a high-level overview of the recently announced (but not yet released) DB2 12 for z/OS, the features that should be in a next generation database, and Gartner’s take on the state of big data and analytics. Let’s briefly address each of these one by one.
Firstly, DB2 12, which will be available in the ESP (early support program) in March 2016. There are a lot of nice new features in this new version. We’ll see a lot more in-memory capabilities, which will speed up queries and processing. Buffer pools can be up to 16 TB, even though today’s z systems can support only 10 TB – IBM is planning for the future with that one!

And we’ll continue to see the blurring of the lines between static and dynamic SQL. How? Well, in DB2 12 we’ll get resource limit facility (RLF) controls for static SQL and plan stability for dynamic SQL. IBM claims that we’ll be able to achieve up to 360 million transactions per hour with DB2 12 for z/OS using a RESTful web API. Pretty impressive.

And there will be no skip-level migration to DB2 12... you have to migrate through DB2 11 to get to 12.

OK, what about the features that should be in a next generation database? According to IBM a next gen DBMS should:
  • Deliver advanced in-memory technology
  • Be fast for both transactional and analytic workloads
  • Provide scalable performance
  • Be available, reliable, resilient, and secure
  • Be simple, intelligent and agile
  • And be easy to deploy and cloud-ready

Sounds about right to me!

And finally, let’s briefly take a look at some of the Gartner observations on big data and analytics. The Gartner presentation was delivered by Donald Feinberg, long-time Gartner analyst on the topic of data and database systems. First of all, Feinberg rejects the term “big data,” saying there is no such thing: it is all just data. He went on to claim that “big data” is perhaps the most ambiguous term out there, but it is also the most searched term at Gartner!

Feinberg also rejected the term data lake, saying “There is no such thing as a data lake, it is just a place to dump data.” He warned that it will come back to bite organizations in a few years if they do not take the time to manage, transform, and secure the data in the lake, turning it into a data reservoir instead.

He also made the very interesting observation that BI/analytics has been the number 1 priority for CIOs on Gartner’s annual CIO priorities survey for 8 years running. But if analytics was really that much of a priority, why haven't they gotten it done yet?

Of course, a lot more happened at IBM Insight today – and many more things were discussed. But I don’t want this blog post to become too unwieldy, so that is all I’m going to cover for now.

I’ll try to cover more later in the week as the conference progresses.


Wednesday, November 06, 2013

IBM Information on Demand 2013, Tuesday

The second day of the IBM IOD conference began like the first, with a general session attended by most of the folks at the event. The theme of today's general session was Big Data and Analytics in Action. And Jake Porway was back to host the festivities.

The general session kicked off talking about democratizing analytics, which requires putting the right tools in people's hands when and where they want to use them. And also the necessity of analytics becoming a part of everything we do.

These points were driven home by David Becker of The Pew Charitable Trusts when he took the stage with IBM's Jeff Jonas, Chief Scientist and IBM Fellow. Becker spoke about the data challenges and troubles of maintaining accurate voting rolls. He talked about more than 12 million outdated records across 7 US states. Other issues mentioned by Becker included deceased people still registered to vote, people registered in multiple states, and, the biggest issue of all, 51 million citizens not even registered.

Then Jonas told the story of how Becker invited him to attend some Pew meetings because he had heard about Jonas' data analytics expertise. After sitting through the first meeting Jonas immediately recognized the problem as being all about context. Jonas offered up a solution to match voter records with DMV records instead of relying on manual modifications.

The system built upon this idea is named ERIC, short for the Electronic Registration Information Center. And Pew has been wowed by the results. ERIC has helped to identify over 5 million eligible voters in seven states. The system was able to find voters who had moved, voters who had not yet registered, and voters who had passed away.

"Data finds data," Jonas said. If you've heard him speak in the past, you've probably heard him say that before, too! He also promoted the G2 engine that he built and mentioned that it is now part of IBM SPSS Modeler.

This particular portion of the general session was the highlight for me. But during this session IBMers also talked about Project NEO (the next generation of data discovery in the cloud), IBM Concert (delivering insight and cognitive collaboration), and what Watson has been up to.

I followed up the general session by attending a pitch on Big Data and System z delivered by Stephen O'Grady of RedMonk and IBM's Dan Wardman. Stephen started off the session and made a couple of statements that were music to my ears. First: "Data doesn't always have to be big to lead to better decisions." Yes! I've been saying this for the past couple of years.

And he also made the observation that since data is more readily available, businesses should be able to move toward evidence-based decision-making. And that is a good thing. Because if instead we are making gut decisions or using our intuition, the decisions simply cannot be as good as those based on facts. And he backed it up with this fact: organizations using analytics are 2.2x more likely to outperform their industry peers.

O'Grady also offered up some Big Data statistics that are worth taking a look at --> here

And then Wardman followed up with IBM's System z information management initiatives and how they tie into big data analytics. He led off by stating that IBM's customers are most interested in transactional data, rather than social data, for their Big Data projects. Which led him to posit that analytics and decision engines need to exist where the transactional data exists -- and that is on the mainframe!

Even though the traditional model moves data for analytics processing, IBM is working on analytics on data without moving it. And that can speed up Big Data projects for mainframe users.

But my coverage of Tuesday at IOD would not be complete without mentioning the great concert sponsored by Rocket Software. Fun. performed and they rocked the joint. It is not often that you get to see such a new, young, and popular band at an industry conference. So kudos to IBM and Rocket for keeping things fresh and delivering high-quality entertainment. The band performed all three of their big hits ("Carry On", "We Are Young", and "Some Nights"), as well as a bevy of other great songs, including a nifty cover of the Stones' "You Can't Always Get What You Want."

All in all, a great day of education, networking, and entertainment. But what will Wednesday hold? Well, for one thing, my presentation on Understanding The Rolling 4 Hour Average and Tuning DB2 to Control Costs.

So be sure to stop by the blog again tomorrow for coverage of IOD Day Three!

Tuesday, November 05, 2013

IBM Information on Demand 2013, Monday

Hello everybody, and welcome to my daily blogging from the IOD conference. Today (Monday, 11/4) was my first day at the conference and it started off with a high octane general session. Well, actually, that's not entirely accurate. It started off with a nice (somewhat less than healthy) breakfast and a lot of coffee. But right after that was the general session.

The session was emceed by Jake Porway, who bills himself as a Data Scientist. He is also a National Geographic host and the founder of DataKind. Porway extolled the virtues of using Big Data for the greater good. Porway says that data is personal and it touches our lives in multiple ways. He started off by talking about the "dark ages" which to Porway meant the early 2000s, before the days of Netflix, back when (horror of horrors) we all went to Blockbuster to rent movies... But today we can access almost all of our entertainment right over the Internet from the comfort of our sofa (except for those brave few who still trudge out to a red box).

From there Porway went on to discuss how data scientists working in small teams can make a world of difference by using their analytical skills to change the world for the better. Porway challenged the audience by asking us "Have you thought about how you might use data to change the world for the better?" And then he went on to describe how data can be instrumental in helping to solve world problems like improving the quality of drinking water, reducing poverty and improving education.

Indeed, Porway said that he views data scientists as "today's superheroes."

Porway then introduced Robert LeBlanc, IBM Senior Vice President for Middleware Software. LeBlanc's primary message was about the four technologies that define the smarter enterprise: cloud, mobile, social, and Big Data analytics.

LeBlanc stated that the amount of unstructured data has changed the way we think, work, and live. And he summed it up rather succinctly by remarking that we used to be valuable for what we know, but now we are more valuable for what we share.

Some of IBM's customers, including representatives from Nationwide and Centerpoint Energy took the stage to explain how they had transformed their business using IBM Big Data and analytics solutions.

I think the quote that summed up the general session for me was that only 1 in 5 organizations spend more than 50 percent of their IT budget on new projects. With analytics, perhaps that can change!

The next couple of sessions I attended covered the new features of DB2 11 for z/OS, which most of you know was released by IBM for GA on October 25, 2013. I've written about DB2 11 on this blog before, so I won't really go over a lot of those sessions here. Suffice it to say, IBM has delivered some great new features and functionality in this next new release of DB2, and they are already starting to plan for the next one!

I ended the day at the System z Rocks the Mainframe event hosted by IBM at the House of Blues. A good time was had by one and all there as the band rocked the house, some brave attendees jumped up on stage to sing with the band, and the open bar kept everyone happy and well lubricated... until we have to get up early tomorrow for Day Two of IOD...

See you tomorrow!


P.S. And for those interested, Adam Gartenberg has documented the IBM announcements from day one of IOD on his blog here.

Monday, October 24, 2011

IBM Information on Demand 2011: Day Two (#IODGC)

As promised, here is the second of my daily blogs from the IOD conference in Las Vegas. Today it was reported that the attendance at the event was the highest ever for an Information On Demand conference; there are more than 11,500 registered attendees.

The second day of the conference is when things really start humming with news and in-depth presentations. The day kicked off with the general session delivered by a host of IBM executives and customers. Big data, business analytics, and gaining insight into data was the theme of the session. The opening session was peppered with lots of interesting facts and figures. For example, did you know that 90 percent of the world's data was created in just the last two years? Me neither... but there was no attribution for that nugget of information, so...

Other highlights of the day included the announcement of Cognos Mobile for the iPhone and iPad (a free trial is available in the iTunes Store)… and the other big product focus of the day was IBM InfoSphere BigInsights, a Hadoop-driven big data solution that can process huge amounts of data very quickly and accurately. For more details on that offering, check out my Data Technology Today blog, where I cover a customer implementation of this solution.

I also had the opportunity to chat with IBM's Bernie Spang, Director of Marketing, Database Software and Systems. We chatted about various things, starting with the uptake of DB2 10 for z/OS. Earlier in the day it was stated that the uptake of V10 has been faster than for V9 and I asked Bernie why that was. His answer made a lot of sense: skip-level migration support coupled with a clear performance boost out-of-the-box without having to change the database or the apps. I asked if he had metrics on how many customers had migrated, but he didn't have access to that. He said he would get back to me and when he does I will share that information with you all.

We also chatted quite a bit about the recently announced DB2 Analytics Accelerator. Bernie thinks this is probably the announcement he is most excited about. For those of you who haven't heard about this great piece of technology, it is the second iteration of the Smart Analytics Optimizer (but that name is now dead). The DB2 Analytics Accelerator is built on Netezza technology and can be used to greatly improve the performance of DB2 for z/OS analytical queries without changing the SQL or any application code. There are multiple value points, but Bernie pointed out the application transparency and the ability to keep the data on the z platform (no movement required) while accelerating the performance of analytical queries.

IBM views the competition as Oracle Exadata and Teradata, which makes sense. I asked Bernie if there were plans to incorporate the Oracle compatibility features of DB2 LUW into a future iteration of DB2 for z/OS, and he said that made sense. Of course, no one from IBM will commit to future functionality of an as-yet-to-be-announced version, but perhaps Vnext??? (That was me speaking there, not Bernie!)

Then I think I blew his mind when I ran a thought of mine past him: with Netezza being used as a component of an accelerator to improve DB2 analytical processing, has IBM given any thought to using IMS as a component of an accelerator to improve DB2's traditional OLTP processing? Not sure if that is even possible, but it should be worth a research project, right? Especially with IBM announcing IMS 12 at the conference today and boasting that IMS 12 can achieve 61,000 transactions per second. That is impressive! But can the mismatch between relational and hierarchical be overcome in a useful manner to do it?

Finally, we chatted about Informix. As a DB2 bigot, I am always at a loss for when to direct people to Informix instead of DB2. It just doesn't sound like something I would do! But Bernie offered a brief overview of Informix time series support as something unique that certain customers should look into. One Informix customer uses time series for the management of over 100 million smart meters. A month's worth of data - 4 terabytes - can be loaded and processed in less than 8 hours. And some queries perform from 30x to 60x faster.

OK, even to this DB2 bigot that sounds like an impressive capability. Kudos to Informix.

Finally, I'd like to direct my readers over to the video blog that I am hosting in conjunction with SoftBase Systems. I'll be interviewing DB2 luminaries daily, so tune in there at http://www.softbase.com/blog to view each daily submission!

Until tomorrow...

Wednesday, September 28, 2011

IBM announces Smart Analytics System 5710

Last week (September 2011), IBM announced the Smart Analytics System 5710, which is a database appliance for business intelligence and data analytics targeted at the SMB market. The IBM Smart Analytics System 5710 is based on IBM System x, runs Linux, and includes InfoSphere Warehouse Departmental Edition and Cognos 10 Business Intelligence Reporting and Query.

The announcement of this appliance was somewhat lost in the shuffle of Oracle's marketing blitz for its similar Oracle Database Appliance, also announced last week. But IBM's offering is geared and pre-configured for quick deployment of analytics and business intelligence capabilities.

The IBM Smart Analytics System 5710 is powered by the InfoSphere Warehouse Departmental Edition which is built on a DB2 data server, and features Optim Performance Manager, DB2 Workload Manager, Deep Compression and Multidimensional clustering.

The IBM Smart Analytics System 5710 provides key capabilities of reporting, analysis and dashboards to enable fast answers to key business questions delivered as a cost-effective solution designed for rapid deployment. It allows users to quickly extract maximum insight and value from multiple data sources and deliver a consistent view of information across all business channels.

It also provides cubing services giving users a multidimensional view of data stored in a relational database. Users can create, edit, import, export, and deploy cube models over the relational warehouse schema to perform deeper multi-dimensional analysis of multiple business variables improving both profitability and customer satisfaction. Cubing services also provide optimization techniques to dramatically improve the performance of OLAP queries.

Additionally, the powerful, yet simple, data mining capabilities enable integrated analytics of both structured and unstructured data in the system. Standard data mining models are supported and can be developed via drag and drop in an intuitive design environment.

So what does it cost? For such a rich collection of software, the starting price is just under $50K. Furthermore, the new offering is part of the IBM Smart Analytics System family, which consists of solutions that span multiple hardware platforms and architectures, including the mainframe (System z).

Tuesday, October 26, 2010

News From The IOD Conference

As usual, IBM has put out a number of press releases in conjunction with the Information On Demand conference, and I will use today’s blog to summarize some of the highlights of these releases.

First of all, IBM is rightly proud of the fact that more than 700 SAP clients have turned to IBM DB2 database software to manage heavy database workloads for improved performance… and, according to IBM, at a lower cost. By that they mean at a lower cost than Oracle. Even though the press release does not state that these SAP sites chose DB2 over Oracle, the IBM executive I spoke with yesterday made it clear that that was indeed the case.

This stampede of SAP customers over to DB2 should not be a surprise because DB2 is SAP’s preferred database software. That preference might itself seem surprising given that SAP recently acquired Sybase, but IBM notes that even Sybase runs SAP on DB2.

The press release goes on to call out several customers who are using DB2 with SAP and their reasons for doing so. For example, Reliance Life chose DB2 for the better transaction performance and faster access to data it delivered. Banco de Brasil, on the other hand, was looking to reduce power consumption and storage by consolidating its database management systems.

IBM also announced new software that helps clients automate content-centric processes and manage unstructured content. The highlight of this announcement is IBM Case Manager, software that integrates content and process management with advanced analytics, business rules, collaboration and social software.

IBM also enhanced its content analytics software. NTT DOCOMO of Japan is impressed with IBM’s offering. “With Content Analytics, we have an integrated view of all information that’s relevant to our business in one place, regardless of where it’s stored,” said Makoto Ichise, Manager of Information Systems Department Group at NTT DOCOMO.

IBM also enhanced its Information Governance solutions and announced further usage of its InfoSphere Streams product for analyzing medical data to improve healthcare.

So IBM software keeps improving and helping us to better manage our data in a constantly changing world…