Company Description
OvationMR offers online research solutions, leveraging platforms such as @EthOS Research Hub for mobile ethnography, diary studies, product testing, and innovation exploration. Our combination of experienced professionals, robust technology, and scientific methods provides accurate and reliable data and insights.
This is a full-time position based in the United States, with salary and bonus commensurate with skills, demonstrated ability to meet the objectives of the role, past job experience, and availability.
We’re building next‑generation applications for the insights industry that blend qualitative depth with quantitative scale. You’ll design and ship production systems that (1) power dynamic conversational chat experiences for qualitative research, capable of context‑aware probing and follow‑ups inside survey workflows, and (2) deliver vector‑database research hubs with natural‑language chatbot front‑ends for exploring transcripts, open‑ends, and other unstructured data.
What you’ll build (types of applications)
Conversational research assistants that ask smart, contextual follow‑ups to open‑ended responses, adapt tone for different respondent personas, and integrate with standard survey flows.
Qual+Quant bridges that use text analytics, embeddings, and lightweight rules to connect conversational prompts with quantitative questions, enabling richer insights without adding friction for respondents.
Research hubs powered by vector search and retrieval‑augmented generation (RAG) to surface themes across interviews, open‑ends, documents, and knowledge bases—through a secure, natural‑language chatbot UI.
Admin tools for researchers to configure conversation styles, triggers (e.g., sentiment, keywords, topics), and approval workflows—plus dashboards for quality, safety, and performance.
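The researcher-configured triggers described above (sentiment, keywords, topics) can be sketched as a small rule engine that picks a follow-up probe for a response. Everything here — the rule shapes, names, and example triggers — is an illustrative assumption, not a description of OvationMR's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Trigger:
    """A researcher-configured rule: when `condition` matches a response, ask `probe`."""
    name: str
    condition: Callable[[str], bool]
    probe: str

def pick_probe(response: str, triggers: List[Trigger]) -> Optional[str]:
    """Return the follow-up question for the first matching trigger, or None."""
    text = response.lower()
    for t in triggers:
        if t.condition(text):
            return t.probe
    return None

# Hypothetical triggers a researcher might configure in an admin tool.
triggers = [
    Trigger("price_mention",
            lambda r: "expensive" in r or "price" in r,
            "What would feel like a fair price to you?"),
    Trigger("negative_sentiment",
            lambda r: any(w in r for w in ("hate", "frustrating", "annoying")),
            "Sorry to hear that. What specifically frustrated you?"),
]

print(pick_probe("The app is great but way too expensive.", triggers))
```

In production, the `condition` callables would typically be replaced by model-scored sentiment or topic classifiers, with the same rule/probe structure on top.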
Responsibilities
Own end‑to‑end development of web services and UIs that deliver reliable, low‑latency conversational experiences in survey contexts.
Implement LLM‑powered pipelines (prompting, tool/function calling, structured JSON outputs) with robust fallbacks, evaluation, and guardrails.
Design and operate vector‑search and RAG stacks (embeddings, chunking, indexing, retrieval policies) for multi‑format research content.
Ship secure APIs and event flows that integrate with major survey and research platforms (generic connectors, webhooks, and SDKs).
Build researcher‑facing configuration UIs (personas, lexicons, rules) and playgrounds for rapid iteration and QA.
Instrument everything: latency budgets, safety/PII metrics, cost controls, A/B experiments, and content quality evaluations.
Contribute to engineering standards, code reviews, CI/CD, observability, and documentation.
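One recurring pattern in the responsibilities above — schema-constrained LLM outputs with robust fallbacks — can be sketched as a validation layer between the model and the survey flow. The schema keys, fallback text, and function names below are hypothetical placeholders, not an actual OvationMR API.

```python
import json

# Hypothetical schema for a model-suggested follow-up probe.
EXPECTED_KEYS = {"follow_up", "confidence"}

def parse_model_output(raw: str, fallback: dict) -> dict:
    """Validate an LLM's JSON output against the expected schema; fall back on any failure."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return fallback
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        return fallback
    if not (isinstance(data["confidence"], (int, float)) and 0 <= data["confidence"] <= 1):
        return fallback
    return data

# Generic probe used whenever the model's output can't be trusted.
FALLBACK = {"follow_up": "Could you tell me more about that?", "confidence": 0.0}

good = parse_model_output('{"follow_up": "Why was setup hard?", "confidence": 0.9}', FALLBACK)
bad = parse_model_output("Sure! Here's some JSON: {oops", FALLBACK)
```

The point of the pattern: a malformed or off-schema model response degrades gracefully to a safe generic probe instead of breaking the respondent's survey experience.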
What will help you succeed (must‑haves)
5+ years building production web apps/services with TypeScript/Node or Python and a modern web framework (React/Next.js or similar).
Strong cloud experience (AWS preferred) across API development, serverless/containers, data stores, and observability.
Hands‑on with LLM orchestration (prompt design, tool/function calling, schema‑constrained outputs) and safety/PII handling.
Practical knowledge of embeddings and vector databases (e.g., OpenSearch, Pinecone, Weaviate, FAISS) and how to tune retrieval for RAG.
Solid grasp of text analytics (language detection, sentiment, keyword/entity extraction) and combining rules with model outputs.
Track record delivering low‑latency user experiences and debugging production systems with real‑time traffic.
Security & privacy mindset: encryption, secrets management, least privilege, GDPR/CCPA, and ISO 27001 fundamentals.
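At its core, the vector-search retrieval mentioned above reduces to ranking stored embeddings by similarity to a query embedding. The toy 3-dimensional vectors and document IDs below are stand-ins for real model embeddings; a production system would use a vector database (e.g., OpenSearch, Pinecone) rather than this in-memory sketch.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, index, k=2):
    """index: list of (doc_id, embedding) pairs. Returns the k most similar doc ids."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-d "embeddings" standing in for real embedding-model output.
index = [
    ("pricing_feedback", [0.9, 0.1, 0.0]),
    ("onboarding_notes", [0.1, 0.9, 0.1]),
    ("support_tickets", [0.2, 0.2, 0.9]),
]

print(top_k([0.9, 0.3, 0.0], index, k=2))
```

Tuning retrieval for RAG then becomes a matter of choices layered on this core: chunk size, k, similarity thresholds, and hybrid keyword/vector scoring.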
Nice‑to‑haves
Experience integrating with survey or research platforms and working around client‑side runtime constraints.
Knowledge of experimentation and evaluation for AI features (offline eval sets, human‑in‑the‑loop review, A/B testing).
Multi‑tenant SaaS patterns, role‑based access control, and usage metering.
Familiarity with OpenSearch Serverless, DynamoDB single‑table design, or event‑driven architectures.
Background in insights, UX research, or CX analytics.
Our (typical) tech stack
We’re pragmatic and tool‑agnostic; this reflects common choices rather than hard requirements.
Backend: Node/TypeScript or Python; REST/GraphQL; serverless or containers on AWS
AI: Major cloud LLM services; embeddings + vector search; RAG pipelines with evaluation and guardrails
Frontend: React/Next.js, TypeScript, modern form/validation, and data‑fetching libraries
Data/Infra: Managed databases, object storage, CI/CD, logging/metrics/tracing
What success looks like (first 90 days)
Equal Opportunity
We value diversity and are committed to creating an inclusive environment for all employees.
How to apply
Please share links to relevant projects (GitHub, demos) and a brief note on an AI feature you shipped, how you evaluated its quality, and how you ensured safety.
Contact: send your resumé to info@ovationmr.com. All serious inquiries will be answered.