
Data Engineer - OpenData APAC

Veeva Systems · Singapore · Not Specified

Posted 23 Jan 2026

Quick Summary

  • Design and build reusable and configurable data pipelines using AWS
  • Turn AI/ML proofs of concept into engineered implementations
  • Work with security teams to ensure resources meet security requirements

Full Description

Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.

At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company – we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balance the interests of customers, employees, society, and investors.

Join us in transforming the life sciences industry while making a positive impact on customers, employees, and communities.

The Role

Veeva OpenData Commercial supports the industry by providing reference data across the complete healthcare ecosystem for commercial sales execution, compliance, and business analytics. We drive value to our customers through constant innovation, using cloud-based solutions and state-of-the-art technologies to deliver product excellence and customer success.

As a data engineer, you will focus on building the next-generation data pipelines and warehouse for OpenData using modern tools, technologies, and platforms.
What You’ll Do:
  • Design and build reusable and configurable data pipelines using AWS.
  • Work with data science teammates to turn AI/ML proofs of concept into properly engineered implementations.
  • Improve our current environment with features, refactoring, and innovation.
  • Work with other data engineers on designs and code reviews.
  • Work with security teams to ensure that all servers, platforms, and other resources meet security requirements.
Requirements:
  • Expert skills in Java or Python
  • Experience developing sophisticated data pipelines in cloud-based environments (e.g. AWS) using scalable data processing tools (e.g. Apache Spark)
  • Demonstrated ability to work with others, particularly providing guidance to other data engineers
  • Ability to communicate complex ideas and topics in English with both technical and non-technical audiences.
Nice to Have:
  • Familiarity with Agile methodologies
  • DevOps skills, especially CI/CD experience
  • Configuring and maintaining cloud-based cluster computing resources and orchestration systems (e.g. EC2 instances, Kubernetes clusters, Elastic Beanstalk)