Event Handler¶
The event handler can subscribe and handle events in Lime CRM. It is possible to publish events from any service.
Lime Object Events¶
Events are published for the following lime object lifecycle events:

- new
- update
- delete
- restore
They are published to routing keys of the following format: `core.limeobject.[limetype].[event].v1`
Examples:
- `core.limeobject.#` matches all events for all kinds of lime objects
- `core.limeobject.*.new.v1` matches new events for any kind of lime object
- `core.limeobject.person.#` matches any events related to Person lime objects
- `core.limeobject.deal.update.v1` matches only events published when Deals are updated
More information about topic routing with RabbitMQ can be found here: https://www.rabbitmq.com/tutorials/tutorial-five-python
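The wildcard semantics follow standard AMQP topic matching: `*` matches exactly one word and `#` matches zero or more dot-separated words. As a sketch (plain Python, not part of the Lime CRM API), the binding patterns above behave like this:

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """Illustrative AMQP topic matching: '*' matches exactly one word,
    '#' matches zero or more words (words are separated by dots)."""
    def match(p, k):
        if not p:
            return not k
        head, rest = p[0], p[1:]
        if head == "#":
            # '#' may consume zero or more of the remaining words
            return any(match(rest, k[i:]) for i in range(len(k) + 1))
        if k and (head == "*" or head == k[0]):
            return match(rest, k[1:])
        return False
    return match(pattern.split("."), routing_key.split("."))

print(topic_matches("core.limeobject.#", "core.limeobject.person.new.v1"))            # True
print(topic_matches("core.limeobject.*.new.v1", "core.limeobject.deal.new.v1"))       # True
print(topic_matches("core.limeobject.person.#", "core.limeobject.person.update.v1"))  # True
print(topic_matches("core.limeobject.deal.update.v1", "core.limeobject.deal.new.v1")) # False
```

In a real subscriber, this matching is performed by the RabbitMQ topic exchange itself when queues are bound with these patterns, as described in the tutorial linked below.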
Configuration¶
Connection to RabbitMQ can be configured as described here.
The service itself is configured as described in the sections below.
Number of Workers¶
The event handler service can run multiple workers, each targeting a specific set of queues. Workers let you isolate event handlers and applications, so one group of tasks doesn’t block others.
Any event handlers not assigned to a named worker will run in the default worker.
Example problem¶
Events processed by the web client are delayed because the service is waiting for responses from webhooks.
Solution¶
Run the lime-webhooks queues in a dedicated worker. This prevents slow webhook processing from blocking the lime_webclient queues.
Example configuration¶
```yaml
event_handler:
  prefetch_count: 1  # Default prefetch count for event handlers
  workers:
    webhooks:
      queue_prefix: lime-webhooks  # Run webhook queues in a separate worker
    webclient:
      queue_prefix: lime_webclient  # Run webclient queues in a separate worker
      prefetch_count: 1000  # Webclient events are frequent & fast
    solution:
      queue_prefix: solution  # Run solution queues in a separate worker
    default:
      prefetch_count: 100  # Default prefetch count for all others
```
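To make the prefix and fallback behaviour concrete, here is a hypothetical helper (the `resolve_worker` function and the dict layout are illustrative sketches, not the actual service implementation) that routes a queue name to a named worker by its `queue_prefix` and resolves the effective `prefetch_count`:

```python
# Mirrors the example configuration above as a plain dict (illustrative only).
CONFIG = {
    "prefetch_count": 1,
    "workers": {
        "webhooks": {"queue_prefix": "lime-webhooks"},
        "webclient": {"queue_prefix": "lime_webclient", "prefetch_count": 1000},
        "solution": {"queue_prefix": "solution"},
        "default": {"prefetch_count": 100},
    },
}

def resolve_worker(queue_name: str, config=CONFIG):
    """Return (worker name, effective prefetch_count) for a queue.

    A queue goes to the first worker whose queue_prefix it starts with;
    anything unmatched falls back to the default worker.
    """
    for name, worker in config["workers"].items():
        prefix = worker.get("queue_prefix")
        if prefix and queue_name.startswith(prefix):
            return name, worker.get("prefetch_count", config["prefetch_count"])
    default = config["workers"].get("default", {})
    return "default", default.get("prefetch_count", config["prefetch_count"])

print(resolve_worker("lime_webclient-events"))  # ('webclient', 1000)
print(resolve_worker("my-app-queue"))           # ('default', 100)
```

Note how the `webhooks` worker, which sets no `prefetch_count` of its own, inherits the top-level default of 1.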
Prefetch count guidelines¶
`prefetch_count` controls how many messages a worker can have fetched from a queue, but not yet acknowledged, at any one time.
- Low values → better for slow or blocking tasks (ensures work is spread evenly across workers).
- High values → better for fast, lightweight tasks (reduces round-trips and increases throughput).
Rule of thumb:

- Blocking or I/O-heavy tasks → `prefetch_count: 1–10`
- Fast, CPU-light tasks → `prefetch_count` in the hundreds or thousands
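The even-spread effect of a low prefetch can be seen in a toy simulation (the model below, with one slow and one fast consumer, is purely illustrative and not how the service dispatches messages): with a prefetch of 1 the fast consumer absorbs most of the backlog, while a large prefetch pins half the messages behind the slow one.

```python
import heapq

def makespan(n_messages, prefetch, seconds_per_message):
    """Toy model: the broker always refills the consumer that frees up
    first, handing it up to `prefetch` messages at a time; each consumer
    then works through its batch sequentially. Returns total finish time."""
    ready = [(0.0, i) for i in range(len(seconds_per_message))]
    heapq.heapify(ready)
    while n_messages > 0:
        t, i = heapq.heappop(ready)          # consumer that frees up first
        batch = min(prefetch, n_messages)    # hand it a prefetch-sized batch
        n_messages -= batch
        heapq.heappush(ready, (t + batch * seconds_per_message[i], i))
    return max(t for t, _ in ready)

speeds = [10.0, 1.0]  # one slow consumer, one fast consumer
print(makespan(100, 1, speeds))   # low prefetch: work is spread evenly
print(makespan(100, 50, speeds))  # 500.0 - half the queue waits behind the slow consumer
```

This is why a blocking handler (like a slow webhook call) should sit behind a small prefetch, while fast handlers can safely batch hundreds of messages per round-trip.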