Full Stack Engineer - A26037

Activate Interactive · Singapore · Not Specified

Posted yesterday

Quick Summary

  • Develop and maintain front-end and back-end components of Gen AI applications.
  • Implement cloud-native solutions using AWS, GCP, and Azure AI/ML services.
  • Design and execute tests to validate LLM accuracy and identify hallucinations.

Full Description

Activate Interactive Pte Ltd (“Activate”) is a leading technology consultancy headquartered in Singapore with a presence in Malaysia and Indonesia. We empower our clients with quality, cost-effective, and impactful end-to-end application development, such as mobile and web applications, and with cloud technology that removes technology roadblocks and increases their business efficiency.

We believe in positively impacting the lives of people around us and the environment we live in through the use of technology. We are therefore committed to providing a conducive environment in which all employees can realise their full potential and, in turn, continuously drive innovation.

We are searching for the next members of our growing team.

If you love the idea of being part of a growing company with exciting prospects in mobile and web technologies that create a positive impact on people’s lives, then we would love to hear from you.

What will you do?

We are seeking a talented and versatile Full Stack Engineer to join our Gen AI team. In this role, you will be responsible for developing and maintaining both front-end and back-end components of our Gen AI applications, including RAG chatbots, agentic AI systems, and film classification platforms. Your expertise in full stack development, cloud-based AI services, and quality assurance will be crucial in bringing cutting-edge AI technologies—including autonomous AI agents—to life through intuitive and efficient web applications.

As a Full Stack Engineer (Gen AI), you will initially support two key projects:

AI Assistant Platform

  • Built on AWS Bedrock with multi-cloud integration (GCP and Azure)
  • Supports Gemini and OpenAI models
  • Frontend development using TypeScript and Next.js
  • Multi-cloud architecture requiring seamless integration across AWS, GCP, and Azure services

Film Classification System

  • Built on GCP's Agentic framework
  • Utilises Gemini models with multi-agent architecture
  • Backend development using Python
  • Advanced agentic AI workflows for automated film content analysis

Key Responsibilities:

Full Stack Development: Design, develop, and maintain both front-end and back-end components of Gen AI applications, ensuring seamless integration between user interfaces and LLM-powered backends.

Cloud-Native Development: Architect and implement cloud-native solutions leveraging AWS, GCP, and Azure services, with particular focus on AI/ML services across these platforms.

LLM Integration: Implement APIs and services to integrate Large Language Models (including AWS Bedrock, Azure OpenAI, and GCP's Gemini) into web applications, focusing on efficient data flow and real-time processing of model outputs.
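As a rough illustration only, the sketch below shows what one such integration could look like on the AWS side, calling a Bedrock-hosted model through boto3's Converse API; the region, model ID, and inference parameters are placeholders, not details of the actual platform described in this posting.

    import boto3

    # Placeholder region and model ID; the real platform configuration is not part of this posting.
    bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-1")

    def ask_llm(prompt: str) -> str:
        """Send a single-turn prompt to a Bedrock-hosted model and return the text reply."""
        response = bedrock.converse(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
            messages=[{"role": "user", "content": [{"text": prompt}]}],
            inferenceConfig={"maxTokens": 512, "temperature": 0.2},
        )
        # The Converse API returns the reply under output -> message -> content blocks.
        return response["output"]["message"]["content"][0]["text"]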

Agentic AI Development: Build and maintain autonomous AI agent systems capable of multi-step reasoning, planning, and decision-making. Implement agent orchestration frameworks that enable AI agents to use tools, access external APIs, and execute complex workflows.

Agent Tool Integration: Develop and integrate tool-calling capabilities for AI agents, enabling them to interact with external systems, databases, and APIs to accomplish user-defined goals autonomously.
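By way of example only, a minimal, framework-agnostic plan-act loop of the kind this responsibility describes might look like the sketch below; the tool, its name, and the call_llm wrapper are all hypothetical.

    import json

    # Hypothetical tool; production agents would call real APIs and databases instead.
    def get_film_metadata(title: str) -> dict:
        """Illustrative tool: return basic metadata for a film."""
        return {"title": title, "year": 2021, "content_hint": "mild violence"}

    TOOLS = {"get_film_metadata": get_film_metadata}

    def run_agent(call_llm, user_goal: str, max_steps: int = 5) -> str:
        """Simple tool-calling loop: the model either returns a JSON tool request
        or plain text, which is treated as the final answer."""
        transcript = [f"Goal: {user_goal}"]
        for _ in range(max_steps):
            reply = call_llm("\n".join(transcript))  # call_llm wraps whichever LLM the project uses
            try:
                action = json.loads(reply)
            except json.JSONDecodeError:
                return reply  # non-JSON output means the agent has finished
            result = TOOLS[action["tool"]](**action.get("arguments", {}))
            transcript.append(f"Observation from {action['tool']}: {json.dumps(result)}")
        return "Stopped after reaching the step limit."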

User Interface Design: Create intuitive and responsive user interfaces for Gen AI applications, with a focus on enhancing user experience in chatbot interactions, agentic AI workflows, and film classification interfaces.

Database Management: Design and maintain databases to store and retrieve data efficiently for Gen AI applications, including user interactions, model outputs, agent execution logs, and system state management.

Performance Optimisation: Optimise application performance, focusing on reducing latency in LLM-powered features and agent execution times, and on ensuring a smooth user experience even under high load.

Security Implementation: Implement robust security measures to protect sensitive data and ensure compliance with data protection regulations, particularly for AI-driven applications handling user inputs and autonomous agent actions.

DevOps and Deployment: Participate in CI/CD pipeline setup and maintenance, ensuring smooth deployment of Gen AI applications across different cloud environments.

Documentation: Maintain comprehensive documentation for codebases, APIs, agent workflows, and system architectures to facilitate knowledge sharing and future development.

Testing & Quality Assurance Responsibilities:

LLM Testing & Validation: Design and execute comprehensive test cases to evaluate the accuracy, reliability, and performance of LLMs integrated into Gen AI applications. Verify that model responses are relevant, contextually appropriate, and factually correct.

Hallucination Identification: Detect hallucinations, cases where the model produces false or fabricated information, and ensure they are promptly flagged and addressed. Help refine models to reduce these occurrences.

Accuracy & Quality Assurance: Assess the accuracy of model outputs, especially in high-precision contexts like chatbot conversations or film classification, ensuring that LLMs produce responses that are both relevant and correct according to predefined business logic.

Test Automation for LLMs: Implement automated testing for common use cases, edge cases, and regression tests, especially focusing on cases that tend to trigger hallucinations or inaccuracies in the model's responses.
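One possible shape for such an automated check, sketched with pytest and a hypothetical answer_llm entry point into the application, is shown below; the prompt and expected phrases are placeholders that a real suite would replace with observed failure cases.

    import pytest

    from app.llm_client import answer_llm  # hypothetical entry point into the application's LLM layer

    # Placeholder regression cases; a real suite would be built from prompts that have
    # previously triggered hallucinations or inaccurate answers.
    CASES = [
        ("What rating categories does the classification system support?",
         {"must_include": ["rating"], "must_not_include": ["fabricated category"]}),
    ]

    @pytest.mark.parametrize("prompt,expectations", CASES)
    def test_known_hallucinations_do_not_reappear(prompt, expectations):
        answer = answer_llm(prompt).lower()
        for phrase in expectations["must_include"]:
            assert phrase.lower() in answer, f"Expected '{phrase}' in the answer to: {prompt}"
        for phrase in expectations["must_not_include"]:
            assert phrase.lower() not in answer, f"Fabricated phrase '{phrase}' reappeared for: {prompt}"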

Functional & Non-Functional Testing: Evaluate the LLM's behaviour in different scenarios to verify that it meets functional requirements. Perform non-functional testing, such as performance, load, and stress tests, to assess how well the LLMs scale when handling high loads or many concurrent queries.

Testing and Debugging: Develop and execute unit tests, integration tests, and end-to-end tests for both front-end and back-end components, with a focus on identifying and resolving issues related to LLM integrations, agent behaviour, hallucinations, and inaccurate outputs.

Bug Reporting & Issue Resolution: Identify and document bugs related to hallucinations, inaccurate outputs, or unexpected model behaviours. Work closely with data scientists and developers to resolve issues and refine models.

Regression Testing: Ensure that model updates, fine-tuning, or new training data do not introduce regressions or increase hallucinations and inaccuracies. Perform retesting of fixed issues and reassess model accuracy after updates.
