Job Description
Location requirement
This role is remote, but you'll need to be based in one of our team hubs: Lille (HQ), Paris, Lyon, Rennes, or Madrid. You'll work from your local office at least one day per week.
As a Delivery Engineer at Feedier, you will own the deployment and technical enablement of our AI-native platform for enterprise customers. You are part of our Delivery Team.
You'll work directly with clients to deploy, configure, and tailor Feedier to their specific workflows (across three critical pillars: AI deployment, data integrations, and platform configuration).
AI is making deployment harder, not easier. AI systems require fine-tuning, prompt engineering, evaluation, and ongoing adjustment. You can't throw an LLM over the wall and expect customers to figure it out. That's why this role exists.
This role is both technical and client-facing.
Key Responsibilities
You own the deployment function for our customers. This is strategic work: every successful deployment accelerates time-to-value and directly impacts retention and expansion.
What does that mean in practice?
Lead the technical deployment of the Feedier Platform, including AI configuration (RAG, observability, models), data integrations (N8N connectors, attributes), and platform setup (SSO, emails, workflows).
Build trusted, long-term relationships with client stakeholders (ops, IT, leadership). Leverage your technical proximity — integration usage, platform adoption patterns, connectors — to proactively identify risks, upsell opportunities, and new use cases before they surface in business reviews.
Translate customer business requirements into concrete platform configurations, automations, and integrations (APIs, webhooks, data pipelines).
Build and maintain connectors and lightweight automations (N8N) that extend the platform for specific client use cases.
Collaborate daily with Account Management, Product, and Engineering teams to resolve blockers, surface product gaps, and feed the roadmap with field insights.
Create and maintain deployment documentation, playbooks, and knowledge base articles to scale the function.