- Bachelor's degree or higher in Computer Science, Information Technology, Engineering, Mathematics, or a related field.
- Proven experience (3+ years) as a data engineer, software engineer, or in a related role, preferably in the finance or banking industry.
- Strong proficiency in SQL and at least one programming language such as Python, Java, or Scala, with the ability to write efficient, clean, and maintainable code.
- Solid understanding of database systems and data warehousing concepts, with hands-on experience in relational databases such as Oracle, PostgreSQL, MySQL, or SQL Server.
- Knowledge of data modeling principles and experience designing efficient data schemas for relational and non-relational databases.
- Strong analytical and problem-solving skills with the ability to troubleshoot and resolve complex data-related issues.
- Excellent communication and collaboration skills with the ability to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders.
- Experience with version control systems such as Git and proficiency in Linux/Unix environments.
- Continuous learning mindset with a willingness to stay updated on emerging technologies and best practices in data engineering and analytics.
- Experience with big data technologies and frameworks such as Hadoop, Spark, Kafka, or Flink for large-scale data processing and analytics.
- Experience designing and implementing scalable data pipelines and ETL processes using tools like Apache Airflow, Apache NiFi, or similar.
- Familiarity with cloud platforms such as AWS, Google Cloud Platform, or Azure, and proficiency in services like AWS Glue, EMR, Redshift, or equivalent.
- Expertise in data visualization tools such as Tableau, Power BI, or matplotlib/seaborn for creating insightful visualizations and reports to communicate findings to stakeholders.