Best Practices

Use these best practices when building workflows.

Design Small, Focused Workflows

  • Keep each workflow focused on one business outcome — for example, "Create company from form submission" rather than "Handle all form submissions".
  • Split complex logic into sub-workflows using the Execute Workflow node. This makes each piece independently testable and reusable.
  • If a workflow has more than 15–20 nodes, consider breaking it apart.

Always Use the Lime CRM Nodes

Never use the generic HTTP Request node to call the Lime CRM API. The Lime CRM and Lime CRM Trigger nodes handle authentication, error formatting, and webhook lifecycle management automatically.

# Wrong
HTTP Request → GET https://your-instance.lime-crm.com/your-database/api/v1/limeobjects/company/

# Correct
Lime CRM → Data → getManyObjects → limetype: company

Choose Robust Triggers

  • Prefer event-driven triggers (Lime CRM Trigger) over Schedule-based polling when possible. Event-driven workflows respond immediately and do not waste API quota.
  • Always validate the trigger payload before processing. All field keys are present in the payload, but their values may be null; check that the values you depend on are non-null and contain sensible content.
  • The trigger payload includes all field values — you do not need to follow every trigger with getSingleObject. Only fetch the full object when you need data from a related record (for example, fetching the company linked to a deal).
# Only trigger and action needed when all data is in the payload
Lime CRM Trigger → [process] → action

# Fetch a related record when you need it
Lime CRM Trigger → getSingleObject (related record) → [process] → action
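The payload validation step can be sketched as a small standalone check. In n8n this logic would typically live in a Code or IF node; the field names below (`company`, `deal_value`) are hypothetical examples, not a fixed Lime CRM schema:

```javascript
// Sketch: validate a trigger payload before processing.
// Keys are always present, so check values, not key existence.
function validatePayload(payload, requiredFields) {
  const missing = requiredFields.filter(
    (field) =>
      payload[field] === null ||
      payload[field] === undefined ||
      payload[field] === ""
  );
  return { ok: missing.length === 0, missing };
}

const payload = { _id: 1042, name: "ACME Corp", company: null, deal_value: 50000 };
validatePayload(payload, ["name", "deal_value"]).ok; // true
validatePayload(payload, ["company"]).ok;            // false: key exists but value is null
```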

Build for Retries and Failures

  • Make steps idempotent — running the same step twice should produce the same result. Use upsert patterns (getManyObjects to check existence → update or create) rather than blind creates.
  • Add explicit error branches using onError: continueErrorOutput on critical nodes, then route the error output to a logging or alerting step.
  • Always include enough context in error logs: the workflow name, the record ID that failed, and the error message.
  • For integrations with external systems, handle transient failures (timeouts, 429 rate limits) with retry logic or alerting so they do not go unnoticed.
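The retry logic for transient failures can be sketched as a small wrapper, here as standalone JavaScript. The helper names in the usage comment (`findCompany`, `updateCompany`, `createCompany`) are hypothetical stand-ins for the corresponding Lime CRM node operations, not real APIs:

```javascript
// Sketch: classify transient failures (HTTP 429 and timeouts are retryable).
function isTransient(err) {
  return err.statusCode === 429 || err.code === "ETIMEDOUT";
}

// Sketch: retry a call with exponential backoff; permanent errors fail fast.
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Give up on permanent errors, or when out of attempts.
      if (!isTransient(err) || attempt === attempts - 1) throw err;
      // Backoff schedule: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

// Hypothetical usage in an idempotent upsert flow:
//   const existing = await withRetry(() => findCompany(orgNumber));
//   existing
//     ? await withRetry(() => updateCompany(existing._id, fields))
//     : await withRetry(() => createCompany(fields));
```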

See Error Handling for the full layered error handling strategy.

Protect Data and Credentials

  • Never hardcode credentials in workflow nodes or expressions. Always use named credential references.
  • Never embed real API keys, passwords, or webhook secrets in workflow JSON — even in comments or sticky notes.
  • Minimize personal data in payloads sent to external systems. Only include the fields the integration actually needs.
  • Validate and sanitize inbound data from webhooks before using it in CRM operations.
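The data-minimization rule amounts to allowlisting fields before an outbound call. A minimal sketch, assuming hypothetical field names (the ERP integration here is an invented example):

```javascript
// Sketch: forward only the fields the integration actually needs.
function pickFields(record, allowedFields) {
  return Object.fromEntries(
    Object.entries(record).filter(([key]) => allowedFields.includes(key))
  );
}

const company = {
  _id: 7,
  name: "ACME Corp",
  orgnumber: "556000-0000",
  contactemail: "person@example.com", // personal data the ERP does not need
};
pickFields(company, ["_id", "name", "orgnumber"]); // drops contactemail
```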

Keep Integrations Observable

  • Log key state transitions, not every field. For example: "deal moved to Won" is more useful than dumping the entire deal object.
  • Use a consistent record identifier across all systems involved in an integration. Use the ID from the master system — if the ERP owns the company record, use the ERP's ID as the correlation key in Lime CRM, log entries, and outbound calls. If Lime CRM is the master, use _id.
  • Write integration activity to Lime CRM where practical — for example, a history note or dedicated monitoring Limetype. This gives visibility into what happened without needing access to workflow execution logs.
  • Track success rate, failure rate, and processing time for critical workflows.
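A state-transition log entry with a shared correlation key can be sketched as follows; the entry shape is a suggested convention, not an n8n or Lime CRM API:

```javascript
// Sketch: a structured log entry for a key state transition.
function transitionLog(workflow, correlationId, fromState, toState) {
  return {
    timestamp: new Date().toISOString(),
    workflow,
    correlationId, // the master system's record ID, reused across all systems
    event: `status changed: ${fromState} -> ${toState}`,
  };
}

transitionLog("Notify Slack When Deal Is Won", "ERP-10023", "Negotiation", "Won");
```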

Control Performance and Load

  • Set explicit limits on getManyObjects queries. Think carefully about expected data volumes — fetching unlimited records should always be a deliberate choice, not a default.
  • Use bulk operations (bulkCreateManyObjects, bulkUpdateManyObjects) only for large datasets (100+ records) where no Python logic, Lime Automations, or webhooks need to fire. Bulk operations bypass the entire application layer.
  • Respect the Lime CRM API rate limit: 3,000 requests per 5 minutes on Cloud. Bulk operations are more efficient than individual CRUD calls at scale.
  • Use asynchronous patterns (Execute Workflow node, separate workflows) for long-running tasks so that the triggering workflow can complete quickly.
  • Terminate sub-workflows with a minimal return node when calling them in a loop. Each call to a sub-workflow via Execute Workflow returns data to the parent — this return data accumulates in the parent workflow's memory across all iterations, even though the sub-workflow's internal memory is freed after each run. Add an Edit Fields node as the last node in the sub-workflow, configured in Manual Mapping mode with a single field: status = "ok". This ensures the sub-workflow passes back almost nothing regardless of how much data it processed internally.
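The memory effect of sub-workflow return data can be illustrated with a standalone simulation. This is illustrative only, using serialized size as a rough proxy for the memory the parent holds; it is not how n8n measures memory internally:

```javascript
// Sketch: the parent keeps every sub-workflow return when calling in a loop.
function simulateLoop(iterations, subWorkflow) {
  const parentMemory = [];
  for (let i = 0; i < iterations; i++) {
    parentMemory.push(subWorkflow(i)); // each return accumulates in the parent
  }
  return JSON.stringify(parentMemory).length; // serialized size as a memory proxy
}

// A sub-workflow returning everything it processed vs. a minimal status object:
const fullReturn = (i) => ({
  processed: Array.from({ length: 1000 }, (_, j) => ({ i, j })),
});
const minimalReturn = () => ({ status: "ok" });

const heavy = simulateLoop(100, fullReturn);    // grows with processed data
const light = simulateLoop(100, minimalReturn); // stays tiny regardless of volume
```

The minimal return keeps the parent's footprint constant no matter how much data each sub-workflow run touched internally.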

Document Your Workflows

Use sticky notes and node notes to make workflows easy to understand at a glance.

Sticky Notes

Use sticky notes to document the purpose and assumptions of a workflow or a complex section. They appear as colored cards on the canvas.

When to add a sticky note:

  • At the start of every workflow — describe what it does and why
  • Before a complex section with non-obvious logic
  • To document assumptions (for example, "assumes one active company per org number")
  • On error handling branches to explain what is logged and where

Format:

## Workflow Name
Short description of what this workflow does and why it exists.

**Trigger:** Deal is created
**Action:** Creates a follow-up todo 7 days out

**Assumptions:**
- The deal always has a linked company
- Runs in production only

Color convention:

| Color  | Use                                        |
| ------ | ------------------------------------------ |
| Grey   | General documentation                      |
| Blue   | Data flow or architecture notes            |
| Yellow | Warnings or important assumptions          |
| Red    | Known limitations or temporary workarounds |

Node Notes

Use the node Notes field (in the node settings panel) for inline documentation on individual nodes.

  • Keep notes to 1–2 sentences
  • Explain why, not what — the node's name already says what it does
  • Document non-obvious configuration choices

Examples:

# On a getManyObjects node:
"Fetches only active companies to avoid re-processing deactivated records."

# On an IF node:
"Routes to update branch if a record with this org number already exists."

# On a bulk import node:
"Uses bulk to handle volume; automation triggers are not required for this migration."

Naming Conventions

Consistent naming makes workflows easier to search, audit, and maintain. Follow these conventions for all workflows, nodes, and credentials.

Workflow Names

Use natural language that makes the purpose of the workflow clear at a glance — no need to read the workflow to understand what it does.

Format: [Action] [object] [when/on] [event or condition]

  • Title case
  • Keep it concise but specific
  • For reusable utility workflows called via Execute Workflow: prefix with Util:

Examples:

Notify Slack When Deal Is Won
Sync Company to ERP on Update
Nightly Export of Active Deals to BI
Create Invoice on Payment Received
Util: Log Integration Error
Util: Handle Global Errors

Avoid:

My workflow              # not descriptive
test123                  # no context
Deal workflow v2         # unclear purpose, version suffix
New workflow (copy)      # copy artifact — rename immediately
crm-trigger-deal-notify  # machine-style slugs are harder to scan

Node Names

Use descriptive names that explain what the node does, not the node type. The node type is already visible from its icon.

Format: {Action} {Entity} — title case, concise

Examples by node type:

| Node type                     | Good name                  | Avoid            |
| ----------------------------- | -------------------------- | ---------------- |
| Lime CRM Trigger              | Deal created               | Lime CRM Trigger |
| Lime CRM (getSingleObject)    | Get deal                   | Lime CRM         |
| Lime CRM (getManyObjects)     | Find company by org number | Get many         |
| Lime CRM (createSingleObject) | Create follow-up todo      | Create           |
| Lime CRM (updateSingleObject) | Mark deal as Won           | Update deal      |
| IF                            | Is company active?         | IF               |
| Set                           | Map ERP fields             | Set              |
| HTTP Request                  | POST to ERP API            | HTTP Request     |
| Execute Workflow              | Log error                  | Execute Workflow |
| Code                          | Parse CSV rows             | Code             |

Tags

Assign tags to every workflow for easy filtering.

Recommended tags:

| Tag          | When to use                         |
| ------------ | ----------------------------------- |
| integration  | Connects to an external system      |
| util         | Utility or sub-workflow only        |
| sync         | Periodic data synchronization       |
| migration    | One-time or recurring data migration |
| notification | Sends alerts, emails, or messages   |
| ai-agent     | Uses an AI Agent node               |

A workflow can have multiple tags: ai-agent, integration, sync.

Credential Names

Format: {System} - {Instance/Environment}

Lime CRM - Production
Lime CRM - Test
Slack - Customer Success Team
External ERP - ACME Corp Production

Use separate credentials for each environment. Never reuse a production credential in a test workflow.

Avoid Common Anti-Patterns

| Anti-pattern                                       | Problem                                                  | Better approach                                        |
| -------------------------------------------------- | -------------------------------------------------------- | ------------------------------------------------------ |
| Fetching all CRM objects without a filter          | Slow, unpredictable result set size, can cause timeouts  | Add a query filter                                     |
| Using generic HTTP Request for Lime CRM API calls  | No error handling, auth complexity                       | Use the Lime CRM node                                  |
| Using bulk import when automations must fire       | Business logic is silently skipped                       | Use individual CRUD with the upsert pattern            |
| No error handling on webhook triggers              | Silent failures                                          | Add error output routing on the first processing node  |
| Storing real credentials in workflow JSON          | Security risk                                            | Always use named credential references                 |