The Database Report – April 2007

The first quarter of 2007 is now behind us and it is time to investigate what happened in the database market over the past three months. At a high level we’ve seen acquisitions, a spin-off, new
product releases, and a plethora of patches. So let’s shed some light on the activities that transpired during January, February, and March of 2007.

Daylight Saving Time

Until this year, daylight saving time kicked in on the first Sunday in April and ended on the last Sunday in October. That meant clocks were set ahead one hour at 2:00 AM in April and set back one hour at 2:00 AM in October. So what, you might be asking? What does daylight saving time have to do with the database market? Good questions; keep reading, please.

The Energy Policy Act of 2005 changed the start and end dates for daylight saving time. Starting this year, daylight time began on the second Sunday in March and will end on the first Sunday in November. And that means that computerized systems had to be changed to deal with this.
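
As an aside, for anyone curious about what the rule change actually means in code, here is a minimal illustrative sketch (mine, not from any vendor patch) that computes the old and new U.S. DST boundary dates for a given year using nothing but the Python standard library; the helper names are invented for the example.

from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    # Date of the n-th occurrence of the given weekday (Mon=0 .. Sun=6) in a month.
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + timedelta(days=offset + 7 * (n - 1))

def last_weekday(year, month, weekday):
    # Date of the last occurrence of the given weekday in a month.
    nxt = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
    last = nxt - timedelta(days=1)
    return last - timedelta(days=(last.weekday() - weekday) % 7)

SUNDAY = 6
year = 2007
print("Old rule:", nth_weekday(year, 4, SUNDAY, 1), "to", last_weekday(year, 10, SUNDAY))
print("New rule:", nth_weekday(year, 3, SUNDAY, 2), "to", nth_weekday(year, 11, SUNDAY, 1))
# Old rule: 2007-04-01 to 2007-10-28
# New rule: 2007-03-11 to 2007-11-04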

The biggest impact for most users was ensuring that the operating system was patched to deal with the revised, earlier start of DST. And most DBMSs required only a few patches where the engine interfaced with and used time values.

But Oracle evidently required a whole bunch of patches. As documented by Chris Foot on his dbazine.com blog (http://www.dbazine.com/blogs/blog-cf/chrisfoot/blogentry.2007-01-27.8134820403) there were patches for the database, for the JVM, for the time zone data types, for the Grid Control and agents, for the e-Business Suite and Application Server, and guidance for third-party applications.

Whew! That was a lot. But it seems that admins everywhere were prepared, as there was no news of widespread problems resulting from the DST changes.

So let’s move on to more specific database-related news, starting with the open source world…

The Little Storage Engine That Could

In early January, MySQL AB announced the alpha version of its Falcon storage engine. Falcon is designed for high performance in high-volume environments, and it will work with MySQL V5.1. But to use it, any existing data must be migrated to a new format for Falcon. Additionally, the Falcon engine is not yet ready for production work and should be used for evaluation purposes only. Once the alpha version has been put through its paces, the company plans a beta version before the Falcon engine becomes ready for production use.

The current alpha version of Falcon works with 32-bit Windows systems as well as 32- and 64-bit Linux systems. Additional platform support is planned.

Why a new engine for MySQL? Well, as regular readers of this column may recall, Oracle acquired one of the more popular MySQL engines, InnoDB. And back in the third quarter of 2006, MySQL dropped support for another engine, Berkeley DB, whose owner, Sleepycat Software, is also now owned by Oracle. So the Falcon engine is most certainly MySQL’s insurance policy against the possibility of one of its competitors (Oracle) owning technology (engines) that is imperative to the long-term success of the MySQL database platform.

MySQL IPO?

Perhaps even bigger news from the open source DBMS community came in late January when MySQL’s plans to go public were unveiled. Its plans for an IPO are not complete, nor has a date been set. But the company did indicate that it was attempting to ensure that it could go public before the end of the year if it chose to do so.

The CEO of MySQL, Marten Mickos, told Computer Business Review that the company still has not used about half of the venture capital money it raised, so there is no burning need for cash driving MySQL to go public. Furthermore, the company has been in existence for twelve years, has almost 10,000 paying customers, and has an installed base estimated at close to 10 million. So MySQL has a lot going for it.

When MySQL finally does go public it will be a bellwether, of sorts, ringing in the possibility of the long-term health and viability of the open source DBMS market. A public MySQL could do for the open source DBMS market what a public Red Hat did for the open source operating system market. Stay tuned…

How Much is That DBMS in the Window?

Also in late January, MySQL announced a one-price-fits-all licensing model for its MySQL Enterprise offering. The pricing change, touted by the company as simplified pricing, allows an organization to install any number of MySQL Enterprise instances for a flat $40,000 per year.

MySQL Enterprise is targeted at enterprises currently using Oracle, DB2, and other larger enterprise database systems. It differs from the MySQL Community Server in that new features are
incorporated more slowly into MySQL Enterprise. Although the two are nearly identical, over time there will be some differences. In other words, MySQL Enterprise should be more stable and cost less
to support.

This pricing model offers a significantly lower cost than Oracle’s enterprise offering, which is priced at roughly the same amount but for only a single CPU. The move seems to appeal to both users and analysts. Stephen O’Grady, principal analyst for RedMonk, said that “…open source has the ability to disrupt traditional enterprise software pricing (and) MySQL is attempting to prove as much with its latest sitewide agreements.” And Glenn Bergeron, systems manager for Instaclick, said that “MySQL Enterprise has made it significantly easier to purchase database software and technical support for our entire organization.”
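
To put the flat fee in perspective, here is a quick back-of-the-envelope comparison in Python; the server and CPU counts are hypothetical, and the per-CPU figure simply mirrors the observation above that the competing enterprise offering runs roughly the same amount for a single CPU.

servers = 25              # hypothetical number of database servers in the shop
cpus_per_server = 4       # hypothetical CPUs per server

flat_sitewide = 40_000                              # MySQL Enterprise: one flat annual fee
per_cpu_total = 40_000 * servers * cpus_per_server  # similar list price, but charged per CPU

print(f"Flat site-wide license: ${flat_sitewide:,} per year")   # $40,000 per year
print(f"Per-CPU licensing:      ${per_cpu_total:,} per year")   # $4,000,000 per year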

What the GPL?

In late December of 2006, Kaj Arno, VP of community relations at MySQL, blogged that MySQL had changed its license to “GPL2 Only.” This caused a bit of a stir because MySQL has been involved with the GPLv3 committee and many anticipated that MySQL would switch to the new GPL when it became available. Instead, it seems, MySQL will wait until there is broad acceptance of GPLv3 in the open source community before adopting the new licensing.

Indeed, MySQL and open source DBMSs were a hot topic this quarter. But what about the big DBMS companies?

Oracle Users Sporting Large Databases

In early January the Independent Oracle User Group (IOUG) reported the results of a recent study indicating that a third of Oracle users manage databases larger than 1 terabyte. This news is interesting, but I guess it really isn’t major news that databases are getting bigger and bigger, so let’s see what else Oracle had up its sleeve this quarter.

Oracle Aids Defections with Free Migration Tools

In early January Oracle announced the release of SQL Developer 1.1 which now allows developers to browse and manage Microsoft Access, MySQL and SQL Server databases, in addition to, of course,
Oracle databases. The move, touted by Oracle as aiding multi-platform shops, was probably driven by the success of a similar, popular tool from Quest Software called TOAD.

But the heterogeneous database support in SQL Developer is still basic. With that in mind, Oracle will be integrating its Migration Workbench software with SQL Developer later this year. The
combination of SQL Developer and Migration Workbench will provide a single offering for managing and importing data from non-Oracle databases.

Oracle is working to better support smaller organizations with tools such as SQL Developer, Migration Workbench and Apex. Anything they can do to make it easier and less expensive to use Oracle in smaller shops can help Oracle grow in the SMB space.

The Return of the Patch

Oracle’s quarterly patch process is working well, as exhibited in mid-January when the company unleashed 51 patches for its product lines, 34 of which addressed holes where a product could have been exploited remotely without authentication.

Of the 51 fixes, 26 addressed flaws in the database product line, including 10 that the company said could be remotely exploited without the need for a username or a password.

The quarterly patch update also delivered 12 fixes for vulnerabilities in Oracle’s Application Server software, eight of which were rated critical. Additionally, three patches corrected remote
exploitation problems in Oracle’s PeopleSoft product line.

Of course, not everybody is happy with the quarterly critical patch update process Oracle has implemented. Information Week reported that Amichai Shulman, chief technology officer of Imperva, was critical, saying “…we’ve seen a lot of these same vulnerabilities, or similar vulnerabilities in previous CPUs.” And Shulman didn’t agree with Oracle’s criticality ratings, saying “Oracle has the tendency to lower the vulnerability ranking. The most serious of this CPU is [ranked] 7 out of 10. But most every security expert would have ranked that one much higher.”

But Oracle is now following a process consistently and its users can be prepared for the regular, quarterly release of patches.

Another Big Oracle Acquisition

Perhaps the biggest news of the quarter was unleashed in early March when Oracle announced its intent to acquire Hyperion Solutions for $3.3 billion. According to Hyperion’s web site they are
“the global leader in Business Performance Management software…(and) with Hyperion you can collect, organize and analyze data–then distribute it throughout your enterprise using a rich, unified
workspace.” OK, so they are a business intelligence (BI) vendor. Of course, BI means more than it used to; today it encompasses the enterprise-wide discipline of using data, analyzing information,
making decisions and managing performance.

You’ve probably heard of, or used, Hyperion’s Essbase product. Essbase (the name comes from “Extended Spread Sheet database”) was originally developed by Arbor Software and was marketed as a multi-dimensional DBMS. Arbor was acquired by Hyperion in 1998. DB2 users might remember using it as DB2 OLAP Server because Essbase was marketed by IBM under that name until late 2005.

At any rate, this is another big acquisition for Oracle and it boosts their BI capabilities significantly. Hyperion has over 12,000 customers worldwide, including 91 of the Fortune 100.

In addition to Essbase, Hyperion offers strong packaged analytics applications. These applications take a business-focused approach, comprising techniques that help build models and simulations to create scenarios and understand current realities and future states. Whereas traditional business intelligence enables us to understand the here and now, and even some of the why, of a given business situation, advanced analytics goes deeper into the “why” of the situation to deliver likely outcomes.

And Oracle, in a letter to customers, indicates that “(t)he transaction yields immediate benefit to both Hyperion and Oracle customers as the companies’ products are already integrated, with
thousands of successful joint deployments. The proposed combination extends Hyperion’s capabilities beyond the finance department with analytic applications and complementary BI tools from
Oracle.”

So this acquisition bolsters Oracle’s BI capabilities and establishes it as a serious contender. An aspect of this acquisition that is particularly interesting is that it raises questions about what will happen next in the BI market, which appears to be ripe for consolidation. And it prompts the question of how SAP will react.

Here are some of the scenarios I can imagine: many of the pure play BI and analytics vendors will become targets for larger vendors looking to keep pace with Oracle. Likely acquisition targets
include Cognos, Business Objects, Microstrategy, and perhaps even SAS. And Teradata, newly independent (see next section), could also be a takeover target because of its strong advanced analytics
offerings. Lesser potential acquisition targets could include Sybase and Information Builders.

Okay, so who would do the acquiring? The most likely candidates are IBM, as they work to stay in stride with Oracle; and HP, as they build their DW/BI portfolio. And don’t rule out SAP, although
they may be more interested in continuing to build internally. Finally, Oracle may not be done in this space, yet, so keep your eyes open and watch this space as the consolidation of the BI market
heats up.

NCR Spins Off Teradata

In early January, NCR Corporation (NYSE: NCR) announced its intent to spin off its Teradata Data Warehousing business as a separate company. The move creates another independent, publicly traded
DBMS company with annual revenue of $1.5 billion. And Teradata’s operating income for 2005 was $309 million.

NCR CEO Bill Nuti said the deal would provide sharper focus for each company. “Teradata and the new NCR operate in different markets, each with solid prospects for the future, but they have
markedly different business models,” Nuti said in a statement. “Both new companies should benefit from sharper management focus on their unique business opportunities.”

The plan calls for Mr. Nuti to continue as head of NCR, and Mike Koehler to ascend from senior vice president of the Teradata division to become president and CEO of Teradata.

The spin-off is expected to be completed in six to nine months, subject to registration of the new security with the Securities and Exchange Commission and certain other customary conditions.

This move makes sense for a number of reasons, including Teradata’s strong position in the data warehousing market, the current consolidation occurring in that market, and the lack of actual synergy between NCR’s core business and Teradata’s business.

Teradata Partners With Microsoft

Somewhat hot on the heels of the previous announcements, Microsoft and Teradata announced that they were “working together to optimize interoperability between Microsoft business intelligence solutions and the Teradata Enterprise Data Warehouse to help information workers gain access to, analyze and report on critical data more quickly, and help streamline the delivery of business intelligence applications.”

Yes, I did copy that right out of the press release. What it amounts to is basically a partnering agreement that will focus on improving interoperability between Teradata’s Enterprise Data Warehouse and Microsoft SQL Server Analysis Services. Interoperability with SQL Server 2005 Reporting Services, SQL Server 2005 Integration Services, and the 2007 Microsoft Office system will also be part of the effort.

The partnership should bolster both organizations as the two companies share technology and resources to improve the synergy between their collective DW and BI offerings. Mutual customers seem to
agree: “Microsoft and Teradata are two of our strategic industry partners, and we are thrilled with the increased collaboration between the companies’ technologies,” said Bill Noakes, executive
vice president and chief information officer at Meijer Inc.

Big Blue Revenue Rolls In

In January IBM announced quarterly revenue of $26.3 billion for the quarter ending December 31, 2006. And that is good news because the total represents an increase of 7.5 percent over the same quarter a year earlier, as well as beating the consensus Wall Street estimate of about $25.6 billion. Profits also came in at a record $3.6 billion for the quarter.

Tellingly, IBM’s middleware brands registered a strong increase. The middleware brands include WebSphere, Information Management (the database stuff), Tivoli, Lotus and Rational. The middleware products accounted for $4.4 billion in revenue, an increase of 18 percent over last year. Indeed, Mark Loughridge, IBM’s CFO, commented that all of IBM’s middleware brands grew faster than the overall market.

And 2006 looked good from the perspective of the full year, too. Total revenue reported by IBM for 2006 was $91.4 billion, an increase of 4 percent over 2005, and annual net income came in at $9.49 billion, an increase of 19.6 percent over last year’s $7.93 billion.

So money is rolling in at IBM – and their database offerings are a big factor in that success.

IBM Acquires Softek

And with that money, IBM wrote a check to acquire Softek Storage Solutions Corporation. In late January IBM announced that it would be acquiring this provider of data migration solutions. According to the Softek web site, its Nonstop Data Mobility solutions provide a simple, unified approach to moving data across the enterprise as part of any IT infrastructure change on any storage, platform or distance, with zero application downtime or performance impact.

IBM has been a Softek global partner since 1996, and has used Softek’s products to migrate data on thousands of services engagements worldwide. IBM will integrate Softek’s data mobility
technology and best practices with IBM’s methods and expertise in storage and data services. The acquisition will improve IBM’s ability to efficiently and reliably move data to better respond to
changing market dynamics.

IBM Goes GA With DB2 9 for z/OS

In early March, IBM announced that “the DB2 9 Viper data server is now available for System z customers.” So, what can you expect in terms of new features from DB2 9? Well, the big one is called pureXML. Basically, pureXML allows you to store data as native XML. With pureXML, users can search and analyze structured data in a relational data repository and unstructured data in an XML repository without the need to reformat it. The approach utilizes dual storage engines: when you want to store XML in DB2 9, you no longer have to store it as a CLOB or shred it into relational tables.
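
To make the “no CLOB, no shredding” point concrete, here is a minimal sketch of what working with an XML column can look like, assuming a reachable DB2 9 data server and the ibm_db Python driver (an assumption on my part; any SQL interface would do). The table, sample document, and connection details are invented for illustration.

import ibm_db

# Hypothetical connection string; substitute a real database, host, and credentials.
conn = ibm_db.connect(
    "DATABASE=SAMPLE;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;UID=dbuser;PWD=secret", "", "")

# With pureXML the column is simply declared as XML: no CLOB, no shredding into side tables.
ibm_db.exec_immediate(conn,
    "CREATE TABLE purchase_orders (id INTEGER NOT NULL PRIMARY KEY, details XML)")

# Store a document natively; XMLPARSE makes the parse from character data explicit.
ibm_db.exec_immediate(conn, """INSERT INTO purchase_orders (id, details)
    VALUES (1, XMLPARSE(DOCUMENT '<order status="open"><item>widget</item></order>'))""")

# Search inside the stored documents with XQuery, right alongside relational predicates.
stmt = ibm_db.exec_immediate(conn, """SELECT id FROM purchase_orders
    WHERE XMLEXISTS('$d/order[@status="open"]' PASSING details AS "d")""")

row = ibm_db.fetch_tuple(stmt)
while row:
    print("open order id:", row[0])
    row = ibm_db.fetch_tuple(stmt)

ibm_db.close(conn)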

What else is new? DB2 Version 9 expands support for online schema changes, renaming the capability Database Definition On Demand (DDOD). New DDOD capabilities include quickly replacing one table with another using cloning, and the ability to rename columns and indexes.

Online table space reorganization is significantly improved, new security and compliance features have been added, there is a new type of table space that combines the attributes of segmented and partitioned table spaces, and there are new data types and many new SQL capabilities. Keep in mind, though, that there are many more features in DB2 9 than I can adequately cover in a column of this nature.

The bottom line is that mainframe DB2 users can look forward to another new version of DB2 with a lot of nice, new functionality.

A Warehouse Balancing Act

In mid-March IBM divulged its strategy for enabling dynamic warehousing, billed by IBM as a new generation of business intelligence capabilities that enable organizations to gain real-time insight and value from their business information. Basically, this is IBM’s advanced analytics play and, although it was likely “in the works” before Oracle announced its intent to acquire Hyperion, it seems to be a reaction to that Oracle announcement. And, of course, IBM is building its dynamic warehousing initiative on its DB2 9 “Viper” data server.

Additionally, IBM announced the IBM Balanced Warehouse, which it touts as the next evolution of the Balanced Configuration Unit (BCU), to provide complete warehousing solutions with pre-configured
software, hardware and storage, enabling faster implementation times with lower risk.

Summary

And so ends another quarter. The new year is off and running with a head of steam in the database market, and you won’t want to miss anything that transpires next quarter, will you? So be sure to
visit TDAN.com each quarter to read each new installment of The Database Report.

Craig Mullins

Craig S. Mullins is a data management strategist and principal consultant for Mullins Consulting, Inc. He has three decades of experience in the field of database management, including working with DB2 for z/OS since Version 1. Craig is also an IBM Information Champion and is the author of two books: DB2 Developer’s Guide and Database Administration: The Complete Guide to Practices and Procedures. You can contact Craig via his website.
