Data Engineer

  • Hybrid
  • Cairo, Al Qāhirah, Egypt
  • Data Engineering

Job description

We are looking for a Data Engineer to join our team in Cairo.

Summary

Raisa is an energy fintech company that uses proprietary technology (in-house software, machine learning microservices, and securitization) to manage large investments in the United States. With over $2 billion of private funding, Raisa has built a diverse portfolio of oil and gas assets. We are passionate about innovation that leverages our team’s capabilities through proprietary technology and creates extraordinary results for all stakeholders.

We are looking for a highly motivated Data Engineer to join our team at Raisa. As a Data Engineer, you will play a pivotal role in driving our data infrastructure forward, staying a few steps ahead to support teams with efficient pipelines, tools, and scalable data solutions. You will collaborate with product management, subject matter experts, data science, and software development teams while working with complex oil and gas, financial, and geospatial datasets. Your work will be essential to ensuring data flows smoothly and meets the needs of the entire organization.


Responsibilities

  • Design, build, and maintain data pipelines and the data warehouse. 
  • Extract and process structured, semi-structured, and unstructured data. 
  • Create dashboards and reports for multiple teams. 
  • Develop and implement complex data models using SQL, Python, and Snowpark. 
  • Manage our cloud data infrastructure on Azure and Snowflake. 
  • Build useful data tools and frameworks for cross-team collaboration. 
  • Mentor other data engineers by providing technical guidance on system and code design. 
  • Communicate with teams to identify data requirements and infrastructure challenges. 

Job requirements


Must have

  • Established hands-on experience as a data engineer in a professional environment (open to mid- and senior-level candidates)
  • Expertise in writing and optimizing complex SQL queries
  • Experience building complex data pipelines using SQL and Python
  • Familiarity with Azure or other cloud platforms (AWS, GCP)
  • Ability to work with ambiguous data requirements and drive clarity
  • Strong focus on data quality and efficiency in handling large datasets
  • Passion for cloud cost optimization and code efficiency


Nice to have

  • Experience with modern data warehousing concepts and cloud warehouses like Snowflake (we are on Snowflake)
  • Knowledge of MLOps, including deploying and monitoring ML models
  • Familiarity with data pipeline orchestration tools like Airflow, Dagster, or Prefect
  • Experience with machine learning concepts and spatial data analysis
  • Familiarity with serverless computing, Docker, Kubernetes, and containerization
  • Experience designing and building pipelines using Apache Spark/Databricks
  • Experience configuring Spark/Dask clusters
