As a GenAI QA Engineer, you will ensure the quality and reliability of our RAG-based AI agent platform. Your responsibilities include:
Design and implement automated testing frameworks for RAG pipelines, including:
Vector database performance and accuracy testing
Retrieval quality metrics and relevance scoring
LLM response validation and hallucination detection
End-to-end agent conversation flow testing
Develop specialized test suites for AI/ML components:
Knowledge base ingestion and chunking strategies
Embedding quality and semantic search accuracy
Prompt injection and security vulnerability testing
Multi-modal content handling (documents, tables, images)
Create automated evaluation frameworks for:
Agent response accuracy and consistency
Contextual understanding and reasoning capabilities
Performance benchmarking across different LLMs
A/B testing for prompt engineering optimization
Collaborate with AI engineers to:
Define quality metrics for RAG architectures
Establish ground truth datasets for evaluation
Implement continuous monitoring for model drift
Design test scenarios for edge cases and failure modes
Build testing infrastructure for:
Multi-tenant agent deployments
Knowledge base versioning and rollback testing
API rate limiting and scalability testing
Integration testing with customer systems
Ensure compliance and safety:
Test for bias and fairness in AI responses
Validate data privacy and security measures
Implement guardrails testing for harmful content
Document AI system limitations and failure modes
Develop comprehensive test strategies for RAG-based AI agents.
Create automated benchmarks for retrieval quality and response accuracy.
Design adversarial testing scenarios to identify system vulnerabilities.
Build dashboards for monitoring AI system performance in production.
Collaborate with customers to understand their AI agent requirements.
Contribute to AI safety and alignment best practices.
Required Skills:
Education: Bachelor's degree in Computer Science, Engineering, AI/ML, or related field.
Experience: 5+ years in software testing with at least 2 years focused on AI/ML systems.
AI/ML Testing Expertise:
Experience testing LLM applications, chatbots, or conversational AI
Understanding of RAG architectures and vector databases (Pinecone, Weaviate, Qdrant)
Familiarity with embedding models and similarity search concepts
Knowledge of prompt engineering and LLM evaluation metrics
Technical Skills:
Proficiency in Python for test automation and AI/ML frameworks
Experience with LLM frameworks (LangChain, LlamaIndex, Haystack)
API testing for RESTful services and streaming endpoints
Familiarity with ML testing tools (MLflow, Weights & Biases, Neptune)
Automation Frameworks:
pytest, unittest for Python-based testing
Experience with async testing for streaming responses
Load testing tools for AI endpoints (Locust, K6)
CI/CD integration with model deployment pipelines
Domain Knowledge:
Understanding of NLP concepts and evaluation metrics (BLEU, ROUGE, BERTScore)
Knowledge of information retrieval metrics (precision, recall, MRR)
Familiarity with financial services use cases for AI agents
Understanding of responsible AI principles
Preferred Qualifications:
Experience with cloud AI services (AWS Bedrock, Azure OpenAI, Google Vertex AI)
Knowledge of vector database optimization and indexing strategies
Familiarity with fine-tuning and model evaluation workflows
Experience with multilingual AI systems testing
Understanding of regulatory requirements for AI in financial services (EU AI Act, GDPR)
Contributions to open-source AI/ML testing frameworks
Les avantages à nous rejoindre :