Lime Task Handler

Lime Task Handler is a wrapper around Celery.

Run lime-task-handler as a service

The service accepts the same options as a Celery worker.

  • Loglevel: The default Celery loglevel is warning. It can be very helpful for local development to explicitly lower it to info.
lime-task-handler --loglevel info
  • Namespace: If you have configured a namespace in your config.yaml file, as shown below, you need to tell the Task Handler to consume from that queue when you start the service. Otherwise, the service will only consume from lime_task_queue_default.
globals:
    namespace: <NAMESPACE>
lime-task-handler --queues lime_task_queue_<NAMESPACE>
  • Scheduled tasks: To run tasks on a schedule, the Task Handler needs to be started "on beat" using the --beat flag.
lime-task-handler --beat

All these options can be combined into one command:

lime-task-handler --loglevel info --queues lime_task_queue_<NAMESPACE> --beat
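
Since the service accepts the same options as a Celery worker, the command above corresponds roughly to the following plain Celery invocation. This is an illustrative sketch only; the application module name is a placeholder:

celery -A <app_module> worker --loglevel info --queues lime_task_queue_<NAMESPACE> --beat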

Alternatively, make sure that the Lime Docker container taskhandler is running.
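
One way to verify that the container is up (assuming you run it with Docker directly; adapt to your setup):

docker ps --filter "name=taskhandler"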

Dependent systems

The Task Handler uses RabbitMQ as a message broker and stores the results in either Elasticsearch or Redis, so make sure those services are running (Docker service names: rabbitmq and elastic/redis).
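
A quick way to check that these containers are running (service names taken from the standard setup; adjust if yours differ):

docker ps --filter "name=rabbitmq" --filter "name=elastic" --filter "name=redis"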

Configuration

The service can be configured as follows (a sketch mapping the tasks settings onto plain Celery follows the example):

tasks: 
    broker_connection_string: amqp://
    backend_connection_string: elasticsearch://localhost
    elastic_connection_string: elasticsearch://localhost
    task_time_limit: None
    task_soft_time_limit: None
    task_queue_name: lime_default_task_queue
    task_exchange_name: lime_default_task_exchange
    task_routing_key_name: lime_default_routing_key
    enable_scheduled_tasks: True
    enable_system_scheduled_tasks: False

features: 
    importer_with_taskhandler: False

importer: 
    connection_string: amqp://guest@localhost//
    use_sql_server: False
    jobs_days_visible: 30
    sql_server_host: localhost
    sql_server_database: lime_crm_import
    sql_server_username: 
    sql_server_password: 
    use_s3: False
    s3_bucket: 
    s3_region: 
    s3_aws_access_key_id: None
    s3_aws_secret_access_key: None
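
Since the Task Handler wraps Celery, the tasks settings above map roughly onto standard Celery configuration. The sketch below is illustrative only and not the Task Handler's actual implementation; the app name is an assumption:

# Rough mapping of the tasks settings onto a plain Celery app (illustrative).
from celery import Celery
from kombu import Exchange, Queue

app = Celery(
    "lime_task_sketch",                       # placeholder app name
    broker="amqp://",                         # broker_connection_string
    backend="elasticsearch://localhost",      # backend_connection_string
)

app.conf.task_time_limit = None               # task_time_limit (no hard limit)
app.conf.task_soft_time_limit = None          # task_soft_time_limit (no soft limit)
app.conf.task_queues = [
    Queue(
        "lime_default_task_queue",                 # task_queue_name
        Exchange("lime_default_task_exchange"),    # task_exchange_name
        routing_key="lime_default_routing_key",    # task_routing_key_name
    ),
]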