About Us:
Upscrape is a fast-growing data automation and web scraping company building advanced scraping systems, custom data pipelines, and API-driven solutions for enterprise clients. We work on complex real-world challenges that require precision, scale, and expertise. As we continue to grow, we are looking to bring on an experienced developer to join our core technical team.
Position Overview:
We’re hiring a full‑time Software Engineer (Python / Full‑Stack) with strong experience in web applications, RESTful API development, and web scraping/browser automation. The ideal candidate has built production‑level systems, from front‑end interfaces to backend services, understands anti‑bot protections, and can independently own end‑to‑end data extraction and delivery workflows. This is a highly technical role, ideal for someone who thrives on solving complex problems and shipping features that make an immediate impact.
Key Responsibilities:
- Design, build, and maintain web applications (front‑end + back‑end) using Flask, FastAPI, or Django paired with modern JavaScript frameworks.
- Develop and document RESTful APIs to serve and manage data.
- Implement and maintain web scraping/browser‑automation pipelines for dynamic, protected sites (Playwright, Selenium, Puppeteer), as part of broader data workflows.
- Architect and operate proxy management, IP rotation, and anti‑blocking solutions.
- Ensure high reliability with robust error handling, retry logic, monitoring, and scalability.
- Collaborate closely with the founder and cross‑functional team to define requirements, estimate tasks, and deliver client projects on time.
Required Experience & Skills:
- 1+ years of experience building production‑level web applications or scraping systems.
- Python proficiency with frameworks/libraries: Flask or FastAPI (preferred), Django, Requests, asyncio/aiohttp, Scrapy.
- Front‑end fundamentals: HTML/CSS, JavaScript, and experience integrating with React or Next.js.
- API development: design, documentation (OpenAPI/Swagger), versioning.
- Database experience: PostgreSQL, MongoDB, or similar.
- DevOps skills: Docker, Git, CI/CD, Linux environments.
- Strong debugging, optimization, and problem‑solving abilities.
- Clear, consistent communication and a collaborative mindset.
Bonus (Nice to Have):
- Experience with AI‑powered data tooling (LLMs, OCR, GPT‑4, Cursor, Claude Code).
- Familiarity with large‑scale architectures handling millions of records.
- Prior work in SaaS, productized data services, or cloud platforms (AWS, GCP, Azure).
The Right Fit:
We’re looking for someone who is:
- Self‑driven – takes full ownership from design through deployment.
- Execution‑oriented – ships clean, maintainable code quickly.
- Experienced – knows what they’re doing and can mentor others.
- Outcome‑focused – prioritizes working systems and client impact over theory.
- Detail‑oriented – writes clean code, follows best practices, and documents thoroughly.
What We Offer:
- 100% remote, full‑time position.
- Stable, long‑term role with clear growth paths.
- Direct, efficient communication.
- Opportunity to work on high‑impact, cutting‑edge projects.
- Competitive compensation: ₹6 LPA – ₹12 LPA, based on skills and experience.
How to Apply (Important Filter):
In your application, please include:
- Links or code samples of web apps, APIs, or scraping projects you’ve built.
- Your preferred tools and libraries, and why.
- A brief overview of your approach to extracting data from highly dynamic websites.
- A note on your experience with AI tools (e.g., Cursor, Claude Code) and your typical development turnaround time.
We look forward to seeing how you can help us build the next generation of data products!