In general, creating indexes to support your most frequent and important Db2 SQL queries is a good idea. But the size of the table will be a factor in deciding whether to index at all, and how many indexes to create.
For tables of more than 100 (or so) pages, it usually is best to define at least one index. This gives Db2 guidance on how to cluster the data. And, for the most part, you should follow the general advice of having a primary key for every table... and that means at least one unique index to support the primary key.
If the table is
large (more than 20,000 pages or so), you need to perform a balancing act to
limit the indexes to those absolutely necessary for performance. When a large
table has multiple indexes, data modification performance can suffer. When
large tables lack indexes, however, access efficiency will suffer. This fragile
balance must be monitored closely. In most situations, more indexes are better
than fewer indexes because most applications are query-intensive rather than
update-intensive. However, each table and application will have its own characteristics and requirements.
For tables containing a small number of pages (up to 100 or so), consider limiting indexes to those required for uniqueness and perhaps to support common join criteria. This is a reasonable approach because such a small number of pages can be scanned as efficiently as, or more efficiently than, using an index.
For small tables you can add
indexes when the performance of queries that access the table suffers.
Test the performance of the query after the index is created, though, to ensure
that the index helps. When you index a small table, increased I/O (due to index
accesses) may cause performance to suffer when compared to a complete scan of
all the data in the table.
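A sketch of such a test using Db2 EXPLAIN follows; it assumes the EXPLAIN tables (such as PLAN_TABLE) have been created under the current SQLID, and the query and names are illustrative:

```sql
-- Sketch: EXPLAIN the query after creating the index...
EXPLAIN PLAN SET QUERYNO = 1 FOR
  SELECT LASTNAME
    FROM EMP
   WHERE EMPNO = '000010';

-- ...then check the access path that Db2 chose.
SELECT QUERYNO, ACCESSTYPE, ACCESSNAME
  FROM PLAN_TABLE
 WHERE QUERYNO = 1;
```

An ACCESSTYPE of 'I' with the new index named in ACCESSNAME indicates the index is being used; an ACCESSTYPE of 'R' indicates Db2 still chose a table space scan, in which case the index may be doing nothing but adding I/O overhead.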