Company Description
Echoleads.ai leverages AI-powered sales agents to engage, qualify, and convert leads through real-time voice conversations. Our voice bots act as scalable sales representatives, making thousands of smart, human-like calls daily to follow up instantly, ask the right questions, and book appointments effortlessly. Echoleads integrates seamlessly with lead sources like Meta Ads, Google Ads, and CRMs, ensuring leads are never missed. Serving modern sales and marketing teams across various industries, our AI agents proficiently handle outreach, lead qualification, and appointment setting.
About the Role:
We are seeking a highly experienced Voice AI/ML Engineer to lead the design and deployment of real-time voice intelligence systems. This role focuses on ASR, TTS, speaker diarization, wake word detection, and building production-grade modular audio processing pipelines to power next-generation contact center solutions, intelligent voice agents, and telecom-grade audio systems.
You will work at the intersection of deep learning, streaming infrastructure, and speech/NLP technology, creating scalable, low-latency systems across diverse audio formats and real-world applications.
Key Responsibilities:
Voice & Audio Intelligence:
- Build, fine-tune, and deploy ASR models (e.g., Whisper, wav2vec 2.0, Conformer) for real-time transcription.
- Develop and fine-tune high-quality TTS systems (e.g., VITS, Tacotron, FastSpeech) for lifelike voice generation and cloning.
- Implement speaker diarization for segmenting and identifying speakers in multi-party conversations using embeddings (x-vectors/d-vectors) and clustering (AHC, VBx, spectral clustering).
- Design robust wake word detection models with ultra-low latency and high accuracy in noisy conditions.
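To give a flavor of the diarization work above: one common approach clusters per-segment speaker embeddings by cosine distance. The sketch below is illustrative only — synthetic vectors stand in for real x-vectors/d-vectors, and the 0.5 distance threshold is an assumed tuning point, not a recommended value:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Two synthetic "speakers": segments drawn around two unit-norm centroids.
c1, c2 = rng.normal(size=128), rng.normal(size=128)
c1 /= np.linalg.norm(c1)
c2 /= np.linalg.norm(c2)
segs = np.vstack([c1 + 0.02 * rng.normal(size=(5, 128)),
                  c2 + 0.02 * rng.normal(size=(5, 128))])

# Agglomerative clustering (AHC, average linkage) over cosine distances;
# segments closer than the threshold are merged into one speaker.
dists = pdist(segs, metric="cosine")
labels = fcluster(linkage(dists, method="average"), t=0.5, criterion="distance")
print(labels)  # one cluster label per segment
```

In production, the threshold would be calibrated on held-out data, and alternatives such as VBx or spectral clustering may replace plain AHC.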
Real-Time Audio Streaming & Voice Agent Infrastructure:
- Architect bi-directional real-time audio streaming pipelines using WebSocket, gRPC, Twilio Media Streams, or WebRTC.
- Integrate voice AI models into live voice agent solutions, IVR automation, and AI contact center platforms.
- Optimize for latency, concurrency, and continuous audio streaming with context buffering and voice activity detection (VAD).
- Build scalable microservices to process, decode, encode, and stream audio across common codecs (e.g., PCM, Opus, μ-law, AAC, MP3) and containers (e.g., WAV, MP4).
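As a minimal sketch of the VAD gating mentioned above: a toy energy gate over fixed-size PCM frames, forwarding only voiced frames downstream. Real pipelines would use a model-based VAD and keep context buffers around speech onsets; the threshold and frame size here are assumptions:

```python
import numpy as np

def energy_vad(frames, threshold=0.01):
    """Return a per-frame speech/non-speech decision (simple energy gate)."""
    return [float(np.mean(f ** 2)) > threshold for f in frames]

sr, frame_len = 16000, 320  # 20 ms frames at 16 kHz
t = np.arange(sr) / sr
signal = np.concatenate([
    np.zeros(sr // 2),                                  # 0.5 s of silence
    0.5 * np.sin(2 * np.pi * 220 * t[: sr // 2]),       # 0.5 s of "speech"
])
frames = signal.reshape(-1, frame_len)
flags = energy_vad(frames)
# Forward only voiced frames to the ASR stage.
voiced = [f for f, v in zip(frames, flags) if v]
print(len(frames), len(voiced))
```

The same gate generalizes to a streaming loop: decide per incoming frame, buffer a little pre-speech context, and flush to the recognizer on voice onset.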
Deep Learning & NLP Architecture:
- Apply transformers, encoder-decoder models, GANs, VAEs, and diffusion models to speech and language tasks.
- Implement end-to-end pipelines including text normalization, G2P mapping, NLP intent extraction, and emotion/prosody control.
- Fine-tune pre-trained language models for integration with voice-based user interfaces.
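For a taste of the text-normalization step in a TTS front-end: the toy below spells out standalone single digits. A real normalizer also covers multi-digit numbers, dates, currencies, and abbreviations; this is a deliberately tiny illustrative subset:

```python
# Toy TTS text normalization: expand standalone single digits to words.
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]

def normalize(text: str) -> str:
    """Replace single-digit tokens with their spoken form."""
    return " ".join(ONES[int(tok)] if tok.isdigit() and len(tok) == 1 else tok
                    for tok in text.split())

print(normalize("call me at 5"))  # call me at five
```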
Modular System Development:
- Build reusable, plug-and-play modules for ASR, TTS, diarization, codecs, streaming inference, and data augmentation.
- Design APIs and interfaces for orchestrating voice tasks across multi-stage pipelines with format conversions and buffering.
- Develop performance benchmarks and optimize for CPU/GPU, memory footprint, and real-time constraints.
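The plug-and-play module design above can be sketched as a shared stage interface chained by a pipeline object. The class and stage names here are hypothetical, and the lambdas stand in for real decode/resample/inference modules:

```python
from typing import Callable, List

class Stage:
    """A pluggable pipeline stage: a named wrapper around a process() callable."""
    def __init__(self, name: str, fn: Callable):
        self.name, self.fn = name, fn

    def process(self, audio):
        return self.fn(audio)

class VoicePipeline:
    """Runs stages (e.g., decode -> VAD -> ASR) in order, passing audio along."""
    def __init__(self, stages: List[Stage]):
        self.stages = stages

    def run(self, audio):
        for stage in self.stages:
            audio = stage.process(audio)
        return audio

# Toy stages standing in for real modules.
pipe = VoicePipeline([
    Stage("decode", lambda x: [s / 32768.0 for s in x]),  # int16 PCM -> float
    Stage("gain", lambda x: [2.0 * s for s in x]),
])
print(pipe.run([16384, -16384]))  # [1.0, -1.0]
```

Because every module exposes the same `process` interface, stages can be swapped, benchmarked, or reordered without touching the rest of the pipeline.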
Engineering & Deployment:
- Write robust, modular, and efficient Python code.
- Containerize and deploy services with Docker and Kubernetes on cloud platforms (AWS, Azure, GCP).
- Optimize models for real-time inference using ONNX, TorchScript, and CUDA, applying quantization, context-aware inference, and model caching.
- Deploy voice models on-device.
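To illustrate the quantization part of the optimization work: symmetric per-tensor int8 quantization maps weights to an 8-bit grid, trading a bounded rounding error for smaller, faster models. This NumPy sketch simulates the arithmetic only; real deployments would use a toolkit such as ONNX Runtime or PyTorch quantization:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: returns (q, scale)."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)
q, scale = quantize_int8(w)

# Dequantize and measure the worst-case reconstruction error.
w_hat = q.astype(np.float32) * scale
err = float(np.max(np.abs(w - w_hat)))
print(err <= scale / 2 + 1e-6)  # rounding error is bounded by half a quant step
```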