Event Handler¶
The event handler can subscribe to and handle events in Lime CRM. Events can be published from any service.
Lime Object Events¶
Events are published for the following lime object lifecycle events:
new
update
delete
restore
They are published to routing keys of the following format: core.limeobject.[limetype].[event].v1
Examples:
core.limeobject.#
matches all events for all kinds of lime objects
core.limeobject.*.new.v1
matches new events for any kind of lime object
core.limeobject.person.#
matches any event related to Person lime objects
core.limeobject.deal.update.v1
only matches events published when Deals are updated
More information about topic routing with RabbitMQ can be found here: https://www.rabbitmq.com/tutorials/tutorial-five-python
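These patterns can be used by any AMQP client that binds a queue to a topic exchange. A minimal pika sketch of subscribing to the events above; the connection details and the exchange name "lime" are assumptions for illustration, not the documented setup:

import pika

# Connection details are placeholders; use your RabbitMQ configuration.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# The exchange name "lime" is an assumption; use the topic exchange that
# your Lime CRM installation publishes lime object events to.
channel.exchange_declare(exchange="lime", exchange_type="topic", durable=True)

# Bind a temporary queue to all "new" events for any lime type.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(
    exchange="lime",
    queue=result.method.queue,
    routing_key="core.limeobject.*.new.v1",
)

def on_event(ch, method, properties, body):
    # method.routing_key identifies the lime type and the event,
    # e.g. "core.limeobject.person.new.v1"
    print(method.routing_key, body)

channel.basic_consume(queue=result.method.queue, on_message_callback=on_event, auto_ack=True)
channel.start_consuming()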
Configuration¶
Connection to RabbitMQ can be configured as described here.
Configuration of the service is set under the event_handler key in the application configuration, as shown in the examples below.
Number of Workers¶
The event handler service can run multiple workers, each targeting a specific set of queues. Workers let you isolate event handlers and applications, so one group of tasks doesn’t block others.
Any event handlers not assigned to a named worker will run in the default worker.
Example problem¶
Events processed by the web client are delayed because the service is waiting for responses from webhooks.
Solution¶
Run the lime-webhooks queues in a dedicated worker. This prevents slow webhook processing from blocking the lime_webclient queues.
Example configuration¶
event_handler:
  prefetch_count: 1 # Default prefetch count for event handlers
  workers:
    webhooks:
      queue_prefix: lime-webhooks # Run webhook queues in a separate worker
    webclient:
      queue_prefix: lime_webclient # Run webclient queues in a separate worker
      prefetch_count: 1000 # Webclient events are frequent & fast
    solution:
      queue_prefix: solution # Run solution queues in a separate worker
    default:
      prefetch_count: 100 # Default prefetch count for all others
Prefetch count guidelines¶
prefetch_count controls how many unacknowledged messages a worker may hold at a time: the broker will not deliver more messages until earlier ones have been acknowledged.
- Low values → better for slow or blocking tasks (ensures work is spread evenly across workers).
- High values → better for fast, lightweight tasks (reduces round-trips and increases throughput).
Rule of thumb:
- Blocking or I/O-heavy tasks → prefetch_count: 1–10
- Fast, CPU-light tasks → prefetch_count in the hundreds or thousands
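Under the hood the prefetch count corresponds to the AMQP basic.qos setting. A minimal pika illustration of the effect; this is a sketch of the underlying mechanism, not the event handler's own code:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# With prefetch_count=1 the broker delivers at most one unacknowledged
# message to this consumer at a time, so one slow message cannot cause
# a backlog of already-fetched work on a single worker.
channel.basic_qos(prefetch_count=1)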
Catching Exceptions¶
catch_handler_exceptions sets how the service reacts to an uncaught exception. By default the event handler is allowed to crash on any event that throws an unhandled exception. Setting it to True prevents the crash and instead moves the failing event to a dead letter queue, where it can be requeued or removed.
Our current recommendation is to leave this setting off and instead make sure to catch and properly handle any possible exceptions in the event handler itself. The setting exists for instances where a customisation is not able to properly handle an exception and the event handler cannot be allowed to crash.
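A minimal sketch of that defensive pattern; handle_limeobject_event and process_event are hypothetical names, and how handlers are registered with the service is omitted:

import logging

logger = logging.getLogger(__name__)

def process_event(event):
    # Placeholder for the actual business logic.
    ...

def handle_limeobject_event(event):
    try:
        process_event(event)
    except Exception:
        # Log and swallow the error so the service keeps running; record
        # enough context to diagnose and replay the failure later.
        logger.exception("Failed to handle event: %r", event)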
Changing this setting requires the relevant queue to first be deleted so that it can be re-created with the correct settings for the Dead Letter Queue.
Dead Letter Queue¶
The Dead Letter Queue (DLQ) is a built-in RabbitMQ feature: instead of removing a failed event, it is moved to a separate queue where it can either be requeued or deleted. By default events remain in the DLQ for 30 days, but this can be adjusted by changing the Time To Live (TTL) value dlq_ttl in the configuration. The value is set in number of seconds, so the 30-day default corresponds to dlq_ttl: 2592000.
Managing the Dead Letter Queue¶
Any failed messages are moved to a queue with a name on the format lime.event_handler.{app name}.dlq, e.g. lime.event_handler.solution-cool-solution.dlq.
There is currently no graphical interface for managing the DLQ. It can instead be managed either through the RabbitMQ management interface at http://hostname:15672/#/queues or through REST requests to the solution backend.
Viewing Broken Events¶
This is useful for checking whether anything has ended up in the DLQ. The response shows the total number of events as well as a list of the broken events.
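A hedged sketch of inspecting the DLQ over REST using Python requests; the host, the endpoint path, and the response fields are hypothetical placeholders, not the documented API:

import requests

# Hypothetical endpoint; substitute the actual DLQ endpoint of your
# solution backend.
url = "https://lime.example.com/solution-cool-solution/dlq/"

response = requests.get(url)
response.raise_for_status()

data = response.json()
print(data["total"])   # assumed field: total number of broken events
print(data["events"])  # assumed field: list of the broken events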
Re-queueing Broken Events¶
In case the solution can be updated to properly handle the broken events, i.e. if the problem was in the code that handles the event, it makes sense to requeue the events. This is done through a PUT request to the endpoint, as sketched at the end of this section.
Removing Broken Events¶
If the problem is with the content of the events themselves, i.e. if the problem comes from an external integration or the code that created the event, it instead makes sense to delete the events. This is done with a DELETE request to the endpoint.
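A correspondingly hedged sketch of both operations; again the endpoint path is a hypothetical placeholder:

import requests

url = "https://lime.example.com/solution-cool-solution/dlq/"  # hypothetical path

# Requeue the broken events once the handler code has been fixed.
requests.put(url).raise_for_status()

# Or, if the events themselves are broken, delete them instead.
requests.delete(url).raise_for_status()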