Artificial Intelligence, Technical Referent - AI Lab
Requirements
What Skills Do I Need?
Technical depth
- 8+ years in software engineering; 3+ years working hands-on with LLMs and AI tooling.
- Strong experience with distributed systems and event-driven architectures, both synchronous and asynchronous.
- Proficiency with LangChain, LangGraph or similar orchestration frameworks, including custom tools and multi-step workflows.
- Solid knowledge of AWS infrastructure and how to run evaluation workloads in a secure, cost-aware way.
- Track record designing and running benchmarks comparing AI models and tools under real constraints.

Evaluation & decision-making
- Able to turn ambiguous "we should try this new thing" ideas into well-scoped evaluation plans with clear hypotheses and metrics.
- Comfortable making trade-off calls (quality vs. latency vs. cost vs. vendor lock-in) and documenting them clearly.
- Experience writing short, opinionated decision memos that help others move fast.

Collaboration & communication
- Can explain technical results to non-specialists in concrete, concise terms.
- Experience working with platform, product and operations teams to align evaluations with real use cases.
- Able to influence without authority, aligning teams around shared standards and guardrails.

Mindset
- Curious and biased toward experimentation, combined with disciplined measurement and risk awareness.
- Comfortable in a small, high-leverage team without embedded PMs: you structure your own work and keep stakeholders informed.
- Builder attitude: you prefer reusable tools, templates and playbooks over one-off work.
Original posting
You'll join the AI Lab, a team whose mission is to validate high-value emerging AI and automation technologies and de-risk their adoption across dLocal. This is a rare opportunity to work at the frontier of applied AI in fintech: running rigorous experiments on the latest models and tools, and turning results into decisions that shape how a global payments company like dLocal adopts AI.
As Technical Referent, you will lead technology scouting and evaluation within dLocal. You will run instrumented spikes and benchmarking on emerging AI technologies, produce clear recommendations for internal teams (business, legal, IT, etc.), and coordinate hand-offs to the teams that take validated technologies into production.
You will partner closely with the engineering teams responsible for taking your proofs of concept into production (domain teams or platformization teams), covering platform infrastructure, enablement programs, IT automations, and knowledge systems. Your core focus is evaluation and recommendation, not long-term ownership of production systems.
What Will I Be Doing?
Technology Scouting & Evaluation
- Run short, instrumented spikes and benchmarking on new models, tools and frameworks: LLMs, vector databases, orchestration frameworks, copilots, assistants and more.
- Compare vendor and open-source options, documenting trade-offs across quality, cost, latency, security and integration complexity.
- Deliver concise decision memos with clear recommendations: adopt, watch, or avoid.

Evaluation Harnesses & Sandboxes
- Design and maintain evaluation environments (e.g., datasets, prompts, scenarios, telemetry) to test models under realistic constraints.
- Build automation and tooling to measure quality, robustness, latency and cost, including regression tracking over time.
- Ensure every evaluated technology has benchmark coverage and a documented risk and limitations view.

Readiness Playbooks & Hand-offs
- For promising technologies, produce readiness playbooks describing recommended patterns, guardrails and integration guidelines.
- Coordinate with platformization teams to turn validated technologies into platformized capabilities.
- Track which validated items progress to platformization or pilots, and capture learnings to sharpen future bets.

Governance, Risk & Standards
- Work with Security, Legal, Compliance and other AI teams to document risk assessments, mitigations and governance recommendations for each evaluated technology.
- Maintain checklists, decision templates and lightweight standards reusable across evaluations and by partner teams.
- Incorporate learnings from third-party AI tooling already in use (e.g., external copilots, AWS AI suite) into adoption guidelines.

Collaboration, Mentoring & Community
- Partner with other AI teams and domain teams to ensure clear boundaries and smooth collaboration.
- Participate in hiring as a technical evaluator and culture champion.
- Mentor engineers in the Lab and adjacent teams on evaluation methods, benchmarking and experimental design.
- Share knowledge through internal write-ups, tech talks and occasional external meetups and conferences.
Application managed by dLocal