Building an Exchange Store Event Sink with C#
When you need to process every incoming message in an Exchange mailbox without touching the UI, the most reliable path is to tap into the store itself. Exchange exposes a low‑level event sink API that lets managed code run inside the COM+ runtime whenever a store operation occurs. The core of that approach is a C# assembly that registers itself as an event sink by implementing the store's IExStoreSyncEvents interface. Once registered, your code receives notifications such as OnSyncSave whenever an item is written, and you can decide what to do next.
The reference material that guided the initial build was Logu Krishnan’s post “Developing Managed Event Sinks/Hooks for Exchange Server Store using C#” on CodeProject. That article walks through the COM+ registration steps, the interface implementation, and the wiring of the event sink to the store. Coupled with the Microsoft Exchange SDK samples, it gives a solid foundation. The steps below distill the key parts and add a few practical details that help avoid the most common pitfalls.
1. Install the Exchange SDK and the .NET Framework developer pack that includes the Interop assemblies for Microsoft.Exchange.Data.Storage. These assemblies expose the COM interfaces you need to reference from C#. Make sure the SDK version matches the Exchange version you are targeting, as interface signatures can drift between releases.
2. Create a new Class Library project in Visual Studio and add references to the following COM objects: Microsoft.Exchange.Data.Storage, ADODB (for the legacy database connectivity), and the .NET wrapper for Exoledb if you prefer a managed API. Once the references are in place, set the project’s target framework to .NET 4.8 or later; this ensures compatibility with the latest Windows Server editions.
3. Implement the IExStoreSyncEvents interface. The most important method is OnSyncSave, which the store calls whenever an item is written to a mailbox. In addition, provide a stub implementation for the interface's other method, OnSyncDelete, to keep the compiler happy. Even if you don't use it, a stub that simply returns normally (the managed equivalent of returning S_OK) is enough to keep the sink alive.
4. Register the assembly with COM+ so that the runtime knows to load it when a store event fires. You do this by opening the Component Services console, navigating to “COM+ Applications”, and creating a new application. Right‑click the application, choose “Add”, then “New Component”. Browse to the compiled DLL and finish the wizard. The key setting is the “Server‑side component” option, which tells Exchange to load your code on the same machine that hosts the mailbox database.
5. Configure the event sink's subscription and lifetime. In the COM+ console, open the component properties, go to the "Events" tab, and subscribe the sink to the OnSyncSave notification. This ensures your code runs during the commit phase of the transaction, giving you access to the final state of the message. Set the lifecycle to "Singleton" so that only one instance serves all events; this keeps memory usage low.
6. Build the project, copy the DLL to the server, and restart the COM+ application. After a restart, the store will begin routing OnSyncSave events to your implementation. You can verify this by placing a breakpoint in the method or by logging a message to a file. If you see the log entries, the sink is active and ready for the next steps.
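Steps 3 through 6 can be condensed into a minimal sink class. Treat this as a sketch rather than a drop‑in implementation: the ExOLEDB namespace name and the exact member signatures come from the interop assembly generated for your SDK version, and the GUID is a placeholder you must regenerate for your own component.

```csharp
using System;
using System.Runtime.InteropServices;
using ExOLEDB; // assumed interop namespace generated from the Exchange SDK type library

[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000000")] // placeholder: generate your own GUID
public class MailSink : IExStoreSyncEvents
{
    // Called whenever an item is written to the mailbox store.
    public void OnSyncSave(IExStoreEventInfo pEventInfo, string bstrURLItem, int lFlags)
    {
        // Filtering and business logic go here (discussed in the next section).
    }

    // Stub: returning normally is the managed equivalent of returning S_OK.
    public void OnSyncDelete(IExStoreEventInfo pEventInfo, string bstrURLItem, int lFlags)
    {
    }
}
```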
Once the sink is up, you can start filtering incoming messages, creating corresponding records in Microsoft CRM, and even deciding whether to keep or delete the message. The rest of this guide walks through the logic that makes that possible.
Implementing OnSyncSave for Targeted Mail Processing
The OnSyncSave event receives three parameters: the event info structure, a string representing the URL of the item being written, and a flag set that describes the operation. For a mail handler that acts on delivered messages, you care about the combination of the EVT_SYNC_COMMITTED and EVT_IS_DELIVERED flags. Those flags indicate that the message has passed the transport pipeline and is being committed to the mailbox store.
Inside the event handler you typically want to wrap the entire body in a try‑catch block to surface any unexpected failures back to the logging framework. A sample implementation follows, with a brief commentary on each step:
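A sketch of such a handler follows. It assumes the flag constants live in an interop enum named EVT_SINK_FLAGS (the name in your generated assembly may differ), a hypothetical ProcessMessage helper that holds the business logic, and a log4net ILog field named log:

```csharp
public void OnSyncSave(IExStoreEventInfo pEventInfo, string bstrURLItem, int lFlags)
{
    try
    {
        // Only act on messages that are committed to the store AND arrived
        // through the transport pipeline (i.e., real delivered mail).
        bool committed = (lFlags & (int)EVT_SINK_FLAGS.EVT_SYNC_COMMITTED) != 0;
        bool delivered = (lFlags & (int)EVT_SINK_FLAGS.EVT_IS_DELIVERED) != 0;

        if (committed && delivered)
        {
            // bstrURLItem is the store URL of the item being written;
            // every helper below receives this same reference.
            ProcessMessage(bstrURLItem);
        }
    }
    catch (Exception ex)
    {
        // Surface unexpected failures to the logging framework rather
        // than letting them escape into the COM+ runtime.
        log.Error("OnSyncSave failed for " + bstrURLItem, ex);
    }
}
```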
In the above code, ProcessMessage is where you perform your business logic: you might parse the message body, extract identifiers, create a lead record in CRM, or flag the message for follow‑up. By filtering on the flag set, you avoid processing drafts or internally generated items that have not yet hit the mailbox.
Because the event runs inside the store’s transaction, you can safely create external records before deleting the message. If anything fails during ProcessMessage, the exception will bubble back up, and the store can roll back the transaction if the sink signals failure. In practice, we prefer to let the event finish and handle errors through logging; the message will still be delivered to the user, and we can re‑run the logic later if needed.
Another nuance is that the event receives a URL that points to the message in the store’s native format. That URL is required for any subsequent operations, such as opening the item with ADODB or calling the Exchange Managed API. Storing the URL in a variable makes the code cleaner and ensures you pass the same reference through all helper methods.
When building for a production environment, be mindful of thread safety. The event sink can be called on multiple threads simultaneously if the server processes many concurrent deliveries. Guard any shared resources with locks or use thread‑safe collections. The sample below demonstrates a simple lock around a shared CRM client object to avoid race conditions when multiple messages are processed at once.
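A minimal sketch of that locking pattern follows; CrmClient and its CreateLead method are hypothetical stand‑ins for whatever CRM integration you use:

```csharp
// One shared client instance, guarded by a dedicated lock object.
private static readonly object crmLock = new object();
private static readonly CrmClient crmClient = new CrmClient();

private void ProcessMessage(string itemUrl)
{
    lock (crmLock)
    {
        // Only one thread at a time may use the shared client,
        // even when the store fires events concurrently.
        crmClient.CreateLead(itemUrl);
    }
}
```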
Finally, you should test the handler in a controlled lab environment first. Deploy the assembly to a test Exchange server, send a handful of emails, and monitor the logs. Confirm that ProcessMessage runs exactly once per message and that the database record appears as expected. Once satisfied, you can roll the changes to the production server.
Deleting the Email from the Exchange Store
After the message has been processed and you have the assurance that the data is safely persisted elsewhere, the next step is to remove the email from the mailbox. Exchange exposes a low‑level OleDB provider, exoledb.datasource, which lets you open a stream to the message and delete it directly. The following method shows a typical pattern, including connection handling and error logging.
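A sketch of that deletion helper, using the ADODB interop assembly and assuming a log4net ILog field named log (exact Open overloads vary between interop versions):

```csharp
private void DeleteMessage(string itemUrl)
{
    ADODB.Connection conn = new ADODB.Connection();
    ADODB.Record rec = new ADODB.Record();
    try
    {
        // exoledb.datasource is the Exchange OLE DB provider.
        conn.Provider = "exoledb.datasource";
        conn.Open(itemUrl);
        if (conn.State != 1) // 1 == adStateOpen
        {
            log.Error("Bad Connection: could not open " + itemUrl);
            return;
        }

        // Open read/write; fail fast if the URL no longer points at an item.
        rec.Open(itemUrl, conn,
                 ADODB.ConnectModeEnum.adModeReadWrite,
                 ADODB.RecordCreateOptionsEnum.adFailIfNotExists);

        // Second argument false = perform the delete synchronously.
        rec.DeleteRecord(itemUrl, false);
    }
    catch (Exception ex)
    {
        log.Error("DeleteRecord failed for " + itemUrl, ex);
    }
    finally
    {
        // Always release both objects; leaked handles accumulate over time.
        try { rec.Close(); } catch { /* already closed */ }
        if (conn.State == 1) conn.Close();
    }
}
```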
Several points deserve emphasis:
- The connection string uses exoledb.datasource as the provider. This is the only provider that supports the DeleteRecord operation on a message URL.
- Always check the connection state before proceeding. A state of 1 (adStateOpen) means the connection is open and ready.
- The record is opened with adModeReadWrite so that the delete operation can modify the store. The adFailIfNotExists flag ensures you get an exception if the URL is stale.
- When calling DeleteRecord, the second parameter is a boolean that controls whether the deletion runs asynchronously. Passing false performs the delete synchronously, which is the safest approach when running inside an event sink.
- Finally, always close both the record and the connection. Failing to do so can leave orphaned handles that accumulate over time.
In a real deployment, you might wrap DeleteMessage inside a transaction that also updates CRM or logs the action. That way, if the deletion fails for any reason, you can roll back the external changes and keep the message in the mailbox. The pattern shown above keeps the logic straightforward and aligns with the event's transactional nature.
Testing this method is straightforward: after processing a message, call DeleteMessage with the URL you received from the event. Refresh the mailbox and confirm the email no longer appears. If you still see the message, check the log for "Bad Connection" or "DeleteRecord" exceptions. Those clues will point you to the missing configuration or permission issue.
Logging, Debugging, and Permission Configuration
When working inside COM+ and Exchange, visibility into the code's behavior is critical. The team found that log4net, configured with a RollingFileAppender, provides the most reliable source of truth. Each COM+ instance can write to a separate file, or you can use a RemotingAppender to ship logs to a central server. The official log4net site documents both appenders in detail; a minimal configuration looks like this:
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
  </configSections>
  <log4net>
    <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
      <file value="C:\Logs\ExchangeSink.log"/>
      <appendToFile value="true"/>
      <rollingStyle value="Size"/>
      <maxSizeRollBackups value="5"/>
      <maximumFileSize value="10MB"/>
      <staticLogFileName value="true"/>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
      </layout>
    </appender>
    <root>
      <level value="DEBUG"/>
      <appender-ref ref="RollingFile"/>
    </root>
  </log4net>
</configuration>




