In some cases, you may want to denormalize the database model: targeted denormalization of specific tables can improve performance. Denormalization may take place at the entity (table) or attribute (column) level and therefore requires, as previously discussed, knowledge of how the data will be accessed and used. Denormalization may be accomplished in many different ways, depending on the performance considerations at hand, but the following tactics have proven useful for improving performance.
If the normalized design results in many 4-way or greater join relationships, consider introducing redundancy at the attribute (column) or entity (table) level, for example by carrying a frequently requested column redundantly in the referencing table so that the join can be avoided (a sketch of this tactic follows).
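For illustration, here is a minimal sketch of column-level redundancy; the order-entry tables and column names are hypothetical and not taken from this guide. Carrying the customer name redundantly in the order table lets a frequent query run without any join:

    import sqlite3

    con = sqlite3.connect(":memory:")

    # Fully normalized: reading an order together with the customer name
    # always requires a join back to the customer table.
    con.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, cust_name TEXT)")
    con.execute("""CREATE TABLE order_norm (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customer(cust_id),
        amount   REAL)""")

    # Denormalized: cust_name is carried redundantly in the order table,
    # so the frequent "orders with customer name" query needs no join.
    # The redundant copy must be kept in sync by the application or a trigger.
    con.execute("""CREATE TABLE order_denorm (
        order_id  INTEGER PRIMARY KEY,
        cust_id   INTEGER REFERENCES customer(cust_id),
        cust_name TEXT,
        amount    REAL)""")

    rows = con.execute(
        "SELECT order_id, cust_name, amount FROM order_denorm").fetchall()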
Redefine attributes to reduce the data width of an entity as follows:
Appropriate attribute candidates are large keys and long text fields. For example, a multiattribute key on which many joins will take place can be replaced with a smaller contrived (surrogate) key, and a long text field can be redefined or augmented by an abstract (summary) field of 255 characters or less, as sketched below.
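A sketch of both candidates, again using hypothetical table and column names: the wide multiattribute key is replaced by a small contrived key, and the long text field gains a short abstract column:

    import sqlite3

    con = sqlite3.connect(":memory:")

    # Original design: a wide multiattribute key that every join must
    # compare column by column and every child table must repeat.
    con.execute("""CREATE TABLE shipment_wide (
        country   TEXT,
        warehouse TEXT,
        bin       TEXT,
        ship_date TEXT,
        details   TEXT,               -- long free-form text field
        PRIMARY KEY (country, warehouse, bin, ship_date))""")

    # Redefined design: a small contrived (surrogate) key carries the joins,
    # and a short abstract column summarizes the long text field so that
    # most queries never have to read the wide details column.
    con.execute("""CREATE TABLE shipment_narrow (
        shipment_id INTEGER PRIMARY KEY,                    -- contrived key
        country     TEXT,
        warehouse   TEXT,
        bin         TEXT,
        ship_date   TEXT,
        abstract    TEXT CHECK (length(abstract) <= 255),   -- short summary
        details     TEXT)""")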
Redefine entities (tables) to reduce the overhead of extraneous attribute (column) data or row data. Appropriate types of associated denormalization are:
Vertical segmentation, in which a wide entity is split into two or more narrower tables, each carrying a subset of the columns. This scenario requires that the primary key be duplicated in each new table. Such a design results in narrower tables and may help with concurrency (illustrated in the first sketch below).
Horizontal segmentation, in which the rows of an entity are split into separate tables along a logical boundary. This scenario works well on entities (tables) with the potential for large data volumes. It is also a good tactic if data rows are accessed as subsets by logical workgroups (department, market segment, geography, and so on). Additionally, if a subset of the larger data set is the most active, that subset is a candidate for segmentation (illustrated in the second sketch below).
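The first sketch illustrates vertical segmentation against an assumed employee table; all table and column names are illustrative only. The wide, rarely read columns move to a second table, and the primary key is duplicated in both:

    import sqlite3

    con = sqlite3.connect(":memory:")

    # Vertical segmentation: frequently read columns stay in a narrow table,
    # while wide, rarely read columns move to a second table.  The primary
    # key (emp_id) is duplicated in each new table.
    con.execute("""CREATE TABLE employee_core (
        emp_id INTEGER PRIMARY KEY,
        name   TEXT,
        dept   TEXT)""")
    con.execute("""CREATE TABLE employee_detail (
        emp_id INTEGER PRIMARY KEY REFERENCES employee_core(emp_id),
        resume TEXT,
        photo  BLOB)""")

    # Most queries touch only the narrow table; the full row is rebuilt
    # with a one-to-one join only when it is actually needed.
    full_row_sql = """SELECT c.emp_id, c.name, d.resume
                      FROM employee_core c
                      JOIN employee_detail d ON c.emp_id = d.emp_id"""
    rows = con.execute(full_row_sql).fetchall()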
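The second sketch illustrates horizontal segmentation with an assumed orders table split by sales region; one region's rows form the active subset and can be scanned on their own, while an occasional report recombines the segments:

    import sqlite3

    con = sqlite3.connect(":memory:")

    # Horizontal segmentation: identical column layout, rows divided along
    # a logical workgroup boundary (here, a sales region).
    for table in ("orders_east", "orders_west"):
        con.execute(f"""CREATE TABLE {table} (
            order_id INTEGER PRIMARY KEY,
            cust_id  INTEGER,
            amount   REAL)""")

    # A workgroup query scans only its own, smaller table ...
    east_total = con.execute("SELECT SUM(amount) FROM orders_east").fetchone()

    # ... while an occasional all-region report recombines the segments.
    all_rows = con.execute(
        "SELECT * FROM orders_east UNION ALL SELECT * FROM orders_west").fetchall()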