Large Value Fields

Long Binary (OLE Object) and Memo fields are collectively referred to as “large value” fields because they are typically much larger than fields of other data types. A record that contains one or more large value fields usually exceeds the 2 KB limit on record size. When it does, each large value field is represented in the record by a pointer, which references one or more separate 2 KB database pages on which the data is actually stored.

When you query tables that contain large value fields, include those fields in the field list only when you actually need them, because retrieving large value data takes time.
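
For example, the following Visual Basic sketch opens a Recordset that names only the fields it needs, so the large value fields are never fetched. The database path, the Employees table, and its LastName, FirstName, Notes (Memo), and Photo (Long Binary) fields are assumed here for illustration.

    ' Sketch: retrieve only the small fields; the Notes and Photo
    ' large value fields are left out of the field list entirely.
    Sub ListEmployeeNames()
        Dim db As DAO.Database
        Dim rs As DAO.Recordset

        Set db = DBEngine.OpenDatabase("C:\Data\Northwind.mdb")   ' assumed path

        Set rs = db.OpenRecordset( _
            "SELECT LastName, FirstName FROM Employees", dbOpenSnapshot)

        Do Until rs.EOF
            Debug.Print rs!LastName & ", " & rs!FirstName
            rs.MoveNext
        Loop

        rs.Close
        db.Close
    End Sub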

A snapshot- or forward-only-type Recordset object opened against large value fields in an .mdb file does not actually contain the large value field data. Instead, the Recordset object maintains references to the data in the original tables, the same way a dynaset references all data.

Handling Large Value Data

Sometimes you need to read or copy data from a large value field when there isn’t enough memory to copy the entire field in a single statement. Instead, you break the data into smaller units, or “chunks,” that fit in available memory. The FieldSize property tells you how large the field is, measured in bytes. You can then use the GetChunk method to copy a specific number of bytes into a buffer, and the AppendChunk method to copy the buffer to the final location. Continue calling GetChunk and AppendChunk until the entire field has been copied.
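
As a minimal sketch of that loop, the following routine copies a Long Binary field from one record to another in 32 KB chunks. The Employees table, its Photo field, the EmployeeID values, and the database path are assumed for illustration, and the 32 KB buffer size is an arbitrary choice.

    ' Sketch: copy a Long Binary (OLE Object) field between records
    ' in 32 KB chunks using FieldSize, GetChunk, and AppendChunk.
    Sub CopyPhotoInChunks()
        Const CHUNK_SIZE As Long = 32768          ' bytes copied per pass (assumed)

        Dim db As DAO.Database
        Dim rsSrc As DAO.Recordset
        Dim rsDst As DAO.Recordset
        Dim totalBytes As Long
        Dim offset As Long
        Dim bytesToCopy As Long

        Set db = DBEngine.OpenDatabase("C:\Data\Northwind.mdb")   ' assumed path
        Set rsSrc = db.OpenRecordset("SELECT Photo FROM Employees WHERE EmployeeID = 1")
        Set rsDst = db.OpenRecordset("SELECT Photo FROM Employees WHERE EmployeeID = 2")

        ' FieldSize reports how many bytes the large value field holds.
        totalBytes = rsSrc!Photo.FieldSize

        rsDst.Edit
        Do While offset < totalBytes
            ' Copy at most CHUNK_SIZE bytes on each pass.
            bytesToCopy = totalBytes - offset
            If bytesToCopy > CHUNK_SIZE Then bytesToCopy = CHUNK_SIZE

            ' GetChunk reads a slice of the source field into a buffer;
            ' AppendChunk adds that buffer to the destination field.
            rsDst!Photo.AppendChunk rsSrc!Photo.GetChunk(offset, bytesToCopy)

            offset = offset + bytesToCopy
        Loop
        rsDst.Update

        rsSrc.Close
        rsDst.Close
        db.Close
    End Sub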