Role Overview
We are seeking a developer to build a minimum viable product (MVP) of an AI-powered automated grading system for English assessments using Large Language Models (LLMs). The objective of this project is to create a simple web app that evaluates and grades a variety of English responses, from multiple-choice questions to written essays, with better accuracy and consistency than off-the-shelf LLM API calls.
This tool will help educators by providing a scalable and efficient method for assessment, ensuring fair and objective grading while enhancing the learning experience through detailed feedback.
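For illustration only, here is a minimal sketch of the kind of rubric-constrained grading call the MVP might wrap. The model name, rubric wording, and 0-5 scale are placeholder assumptions rather than project requirements, and any LLM provider could be substituted.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK; another provider could be swapped in

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder rubric; the real criteria and scale would come from the assessment standards.
RUBRIC = """Score the essay on a 0-5 scale for each criterion:
- grammar: grammatical accuracy and mechanics
- coherence: organisation and logical flow
- argument: strength and support of the argument
Return JSON: {"grammar": int, "coherence": int, "argument": int, "feedback": str}"""

def grade_essay(prompt: str, essay: str, model: str = "gpt-4o-mini") -> dict:
    """Grade one essay against a fixed rubric, using deterministic settings
    (temperature 0, structured JSON output) for more consistent scoring."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        response_format={"type": "json_object"},  # force a machine-readable JSON reply
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Essay prompt:\n{prompt}\n\nStudent response:\n{essay}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```

Pinning a fixed rubric, a deterministic temperature, and a structured output format is one simple way to get more consistent scores than ad-hoc prompting; calibrating the scores against human-graded samples would still be part of the work.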
Responsibilities
- Develop and implement an AI-powered auto-grading system for English assessments using NLP techniques.
- Fine-tune LLMs with annotated datasets to reliably assess grammatical accuracy, coherence, and argument quality (a sketch of one possible training-data format follows this list).
- Design the system to provide detailed, actionable feedback to educators and students.
- Ensure the tool is scalable, efficient, and capable of delivering consistent and objective grading.
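As a rough, hypothetical sketch of how an annotated dataset could be prepared for supervised fine-tuning: the record fields, system prompt, and OpenAI-style chat JSONL layout below are assumptions, and other providers use slightly different formats.

```python
import json

# Hypothetical annotated record layout: each item pairs a student response with human-assigned scores.
annotated = [
    {
        "question": "Describe a place that is important to you.",
        "response": "The park near my house is important to me because...",
        "scores": {"grammar": 4, "coherence": 3, "argument": 3},
        "feedback": "Clear description, but the second paragraph loses focus.",
    },
]

def to_finetune_example(item: dict) -> dict:
    """Convert one annotated response into a chat-style JSONL record for
    supervised fine-tuning (OpenAI-style format assumed here)."""
    return {
        "messages": [
            {"role": "system", "content": "You are an English assessment grader. Reply with JSON scores and feedback."},
            {"role": "user", "content": f"Question:\n{item['question']}\n\nStudent response:\n{item['response']}"},
            {"role": "assistant", "content": json.dumps({**item["scores"], "feedback": item["feedback"]})},
        ]
    }

# Write one JSON record per line, the usual input format for fine-tuning jobs.
with open("grading_finetune.jsonl", "w", encoding="utf-8") as f:
    for item in annotated:
        f.write(json.dumps(to_finetune_example(item)) + "\n")
```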
Required Skills
- Expertise in Natural Language Processing (NLP) and Machine Learning.
- Experience with Large Language Models (LLMs) and their application in assessment tools.
- Strong understanding of English grammar and assessment criteria.
- Proficiency in programming languages commonly used in AI development, such as Python.
Nice to Have
- Experience in educational technology and the development of assessment tools.
- Familiarity with current educational standards and practices in English language assessments.