Senior Backend Engineer (LLM/AI Experience)
Full-time | Tech Team
📍 Barcelona (hybrid) or remote within Europe
💰 €50,000 – €65,000
threesixfive is working with a Barcelona-based AI product company that builds production-grade conversational AI systems used by enterprise customers.
The company operates at the intersection of LLMs, real-time backend systems, and cloud-native architecture, delivering intelligent AI experiences at scale rather than prototypes or experiments. Their platform is already live with paying customers and continues to grow rapidly.
As part of this partnership, we’re hiring a Senior Backend Engineer to play a key role in designing, building, and operating the backend systems that power these AI-driven experiences.
In a few words
Build and scale backend systems powering production conversational AI
Hands-on senior role balancing architecture and real-world delivery
Work deeply with LLMs, RAG pipelines, and real-time systems
📍 Barcelona (hybrid) or remote in Europe | 💰 €50k–€65k
Why this role is exciting
You’ll help shape the backend foundations of a cutting-edge AI platform, where your architectural and performance decisions directly impact real-time AI experiences used by enterprise customers.
This is not an R&D or prototype-heavy role — it’s about shipping, scaling, and operating intelligent systems in production.
🚀 The Role
We’re looking for a Senior Backend Engineer with recent, real-world LLM product experience who combines strong architectural thinking with hands-on development.
You’ll work closely with senior engineering leadership, contributing to technical direction and system design, while remaining deeply involved in day-to-day coding, feature delivery, and production operations.
This is a role for someone who both designs systems and builds them.
🛠️ What You’ll Do
Architecture & System Operations (≈50%)
Contribute to architectural decisions and backend system design
Collaborate with infrastructure engineers on:
Platform architecture
Observability (monitoring, logging, alerting)
Deployment and scaling strategies
Optimise system performance:
Latency, throughput, database queries, caching, resource usage
Own production operations:
Troubleshooting, incident response, on-call
Provide technical guidance through:
Code reviews
Design discussions
Backend best practices
Collaborate on real-time and streaming-oriented architectures
Feature Development & Implementation (≈50%)
Build and maintain backend services in production-grade Python (and Go)
Develop conversation-related features:
State management
History
Intelligence improvements
Implement multi-document knowledge bases and retrieval systems
Integrate LLM APIs and build RAG pipelines
Design and maintain APIs and service integrations (gRPC, REST)
Work on core backend services:
Orchestration
Caching
Platform APIs
Own testing, CI/CD pipelines, and deployment automation
🧰 Tech Stack
Python (FastAPI) and Go (gRPC services)
AWS (S3, EC2, Lambda, managed services, LLM platforms)
Docker, Kubernetes
RabbitMQ, Redis
LLM APIs (e.g. OpenAI, managed cloud LLM services)
✅ What We’re Looking For
Must-Have
5+ years of backend engineering experience
Strong background in:
Distributed systems
Microservices
Real-time architectures (WebSockets, gRPC, event-driven systems)
Proven experience building and deploying high-performance Python services
2+ years of recent experience (2022 onwards) working with LLM-powered systems in production
Hands-on use of LLM APIs
Experience building RAG or retrieval-based systems
Comfortable balancing architecture design with hands-on implementation
Strong AWS experience with a focus on:
Performance optimisation
Observability
Production reliability
Demonstrated ability to improve production systems:
Latency
Throughput
Technical debt
Excellent collaboration skills across backend, infrastructure, and AI-focused teams
Bonus Points
Go experience
Experience with real-time media, streaming, or conversational platforms
Data engineering or ML model-serving infrastructure exposure
🎯 What Success Looks Like
First 6 Months
Ownership of critical backend services established
Knowledge transfer completed
Core conversation and knowledge features live in production
Clear performance and observability improvements delivered
First 12 Months
Backend services and RAG pipelines running reliably at scale
Platform-wide performance improvements delivered
Other engineers supported and unblocked through your technical guidance
Recognised internally as a go-to expert for backend + LLM implementation
💼 What’s On Offer
Compensation & Flexibility
💰 €50,000 – €65,000, depending on experience
🏠 Hybrid work in Barcelona or remote within Europe
Impact & Growth
Ownership of backend systems used daily by enterprise customers
Hands-on technical role with real architectural responsibility
Deep technical challenges across:
LLMs
RAG pipelines
Real-time systems
Scalability and reliability
High-impact role in a small, senior engineering team
Close collaboration with engineering, AI, and infrastructure specialists
Opportunity to deepen expertise in applied AI systems
Additional Perks
Central Barcelona office
Flexible remote working
Lunch support when in the office
Private health insurance
Travel allowance (distance-based)
Flexible benefits under Spanish tax legislation
Wellness / fitness discounts
To apply or learn more: email kris@threesixfive.es with your CV or LinkedIn profile.