Data Platform Engineer
2024-12-14
Singapore, Singapore
Job Details
Roles & Responsibilities
Skills
PySpark
Business Intelligence
Scala
Azure
Pipelines
Data Transformation
JavaScript
Data Quality
Microservices
SQL
Python
Continuous Integration
Docker
Java
Data Analytics
Data Warehousing
Responsibilities
As a key contributor to our team, you will help build a robust Data Platform that powers analytics and AI-driven solutions. Your responsibilities include:
- Building and maintaining end-to-end data systems using languages such as Python and Scala.
- Designing, developing, and managing data pipelines for real-time decision-making, reporting, and data collection.
- Implementing processes to ensure data quality, governance, and security.
- Developing ETL/ELT processes and working with structured, semi-structured, and unstructured data.
- Creating and optimizing data models and workflows to support data transformation and storage.
- Leveraging cloud data technologies and services, including Azure, for scalable and efficient data solutions.
- Collaborating with cross-functional teams to produce high-quality, well-tested, and secure code.
- Utilizing tools such as Spark Streaming, Delta Tables, and Databricks for handling large datasets.
- Managing diverse data stores, including data warehouses, RDBMS, and in-memory caches.
- Supporting data-driven decision-making with innovative solutions and advanced analytics.
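The pipeline and data-quality responsibilities above can be sketched in simplified form with plain Python (the role's actual stack involves Spark, Delta Tables, and Databricks; all record and field names here are invented for illustration):

```python
import json

def extract(raw_lines):
    """Parse newline-delimited JSON records (simplified 'extract' step)."""
    return [json.loads(line) for line in raw_lines]

def transform(records):
    """Normalize field names and types (simplified 'transform' step)."""
    return [
        {"user_id": int(r["id"]), "amount": round(float(r["amt"]), 2)}
        for r in records
    ]

def quality_check(records):
    """Drop records that violate basic data-quality rules."""
    return [r for r in records if r["user_id"] > 0 and r["amount"] >= 0]

raw = ['{"id": "1", "amt": "19.99"}', '{"id": "-3", "amt": "5.00"}']
clean = quality_check(transform(extract(raw)))
print(clean)  # the record with a negative user_id is filtered out
```

At production scale the same extract/transform/validate pattern would run as Spark jobs over cloud storage rather than in-process lists.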
Qualifications
Required Skills:
- Education: BS/MS in Computer Science, Information Systems, Data Analytics, or equivalent experience.
- 2+ years of experience in:
  - Building pipelines and incorporating workflow tools into data system designs.
  - ETL/ELT, Data Warehousing, or Business Intelligence development.
  - Leveraging SQL for data investigations and problem-solving.
  - Working with cloud platforms and services for data processing and storage.
  - Handling structured, semi-structured, and unstructured data across diverse storage systems.
- Hands-on experience with:
  - Large datasets and tools like Databricks or PySpark.
  - Storage systems such as Azure Data Lakes or other cloud-native solutions.
  - Spark Streaming and creating Delta (Live) Tables.
- Strong background in design, implementation, and testing for scalable data solutions.
- Proficiency in developing for continuous integration and automated deployments.
- Ability to work in a collaborative environment with excellent communication skills.
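The "SQL for data investigations" requirement might look like the following self-contained sketch, using SQLite in place of a warehouse engine (the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database standing in for a data warehouse (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "SG", 120.0), (2, "SG", 80.0), (3, "MY", 50.0), (4, "MY", None)],
)

# A typical investigation: revenue per region, plus a count of NULL amounts
# that would flag an upstream data-quality problem.
rows = conn.execute(
    """
    SELECT region,
           COUNT(*) AS orders,
           SUM(amount) AS revenue,
           SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END) AS null_amounts
    FROM orders
    GROUP BY region
    ORDER BY region
    """
).fetchall()
print(rows)  # [('MY', 2, 50.0, 1), ('SG', 2, 200.0, 0)]
```

The same aggregate-plus-anomaly-count pattern transfers directly to Spark SQL or a cloud warehouse.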
Preferred Skills:
- Experience with microservices platforms like Kubernetes, Docker, and Helm Charts.
- Familiarity with event-driven streaming systems (Kafka, Event Hub, Event Grid, Apache Flink).
- Knowledge of advanced tools such as DBT or Data Vault methodologies.
- Experience with Microsoft Fabric or administering BI tools.
- Proficiency in additional programming languages like JavaScript, Java, or C#.
- Ability to guide and mentor other developers, providing technical leadership and expertise.
ALAN PARTNERS SG PTE. LTD.
© Copyright 2025 Agensi Pekerjaan JEV Management Sdn. Bhd., registered in Malaysia (Company No: 201701016948 (1231113-U), EA License No. JTKSM860)
© Copyright 2025 Job Majestic Sdn. Bhd., registered in Malaysia (Company No: 201701037852 (1252023-X))
All rights reserved.