Data Engineer (Contract)
Sector: Technology
Contact Name: Aviral Bhargava
Expiry Date: 13-Feb-2026
Date Published: 14-Jan-2026
Role Overview:
This position sits within the technology transformation function of a multinational organisation, supporting enterprise-wide reporting, analytics, and modern data architecture initiatives. The role focuses on designing and operationalising automated data ingestion pipelines, transformation workflows, and governed cloud-based data layers to enable business intelligence, process transparency, and AI-driven insights. It requires close collaboration with product, analytics, and portfolio teams to deliver scalable data solutions within a structured cloud environment.
Key Responsibilities:
- Build and maintain automated ingestion pipelines from workflow tools and other enterprise data sources into cloud-based storage and analytics platforms.
- Develop and tune pipelines supporting batch and near-real-time loads, including incremental ingestion from relational databases.
- Design layered data architecture (raw to curated to consumption) and create models optimised for reporting, analytics, and downstream semantic layers.
- Implement data quality checks, monitoring mechanisms, and remediation workflows covering completeness, consistency, timeliness, and lineage tracking.
- Apply metadata governance, cataloguing, lifecycle rules, and policy enforcement using cloud-native governance tooling.
- Develop and validate transformation logic using SQL/Python, incorporating unit/integration testing and CI/CD deployment patterns.
- Collaborate with product owners and BI teams to translate reporting needs into data contracts, datasets, and model structures suitable for analytical tools.
- Produce technical documentation including schema definitions, runbooks, SLO/SLA expectations, and reusable standards for future data products.
Key Requirements:
Must-Have:
- Minimum 5 years’ experience building production-grade cloud data pipelines.
- Proven hands-on expertise with Google Cloud data services (e.g., BigQuery, Dataplex, Dataflow).
- Strong SQL capabilities, including performance tuning, stored procedures, and change data capture (CDC) patterns across relational databases.
- Demonstrated experience integrating workflow platform APIs and processing structured files from enterprise repositories.
- Solid understanding of dimensional modelling, lakehouse patterns, and governed data architecture frameworks.
- Experience in data governance, quality validation rules, and monitoring frameworks.
- Familiarity with BI consumption patterns and semantic layer development.
Nice-to-Have:
- Exposure to insurance or financial services data domains.
- Working knowledge of Power BI modelling and DAX optimisation considerations.
If this role aligns with your experience and career goals, please send your application to AviralBhargava@argyllscott.sg.
Argyll Scott Asia is acting as an Employment Business in relation to this vacancy.