The "Voice Commands" module allows applications to easily use command-and-control speech recognition. Applications merely provide a list of commands that they understand, and they are notified when a command is spoken. The list of available commands is called a "voice menu" because it is very similar to a Windows menu: a list of commands (menu items) that the user can select with a mouse or keyboard.
We are using a similar metaphor for the "Voice Dictation" module, which allows users to dictate into applications. The Voice Dictation module's infrastructure and interfaces are modeled on an edit box, just as Voice Commands is modeled on a Windows menu.
The "Voice Dictation" object allows applications to easily provide dictation. The object uses the metaphor of an invisible edit box: whenever the user speaks a word, the text is entered directly into the invisible edit box, and the application is notified that the text in the edit box has changed, and how it has changed. Since the invisible edit box cannot be seen, the application must display any changes to the user itself. An application will most likely maintain its own rich-text control and keep the invisible dictation "edit box" and the rich-text control synchronized.
Of course, if an application merely wanted to transcribe the words spoken by the user, it would be easier to paste the PhraseFinish() results from a dictation engine directly into a text box. However, Voice Dictation provides more functionality than just transcription: automatic text formatting (capitalization and spacing), translation of punctuation words into symbols, built-in glossary entries, limited commands, and a GUI that lets users correct the dictation engine. It has the potential to support more functionality in the future, especially once continuous dictation becomes widespread.