Senior Data Engineer

  • FTE
  • Research & Innovation
  • Lagos

Who We Are

Seamfix Limited is on a quest, over the next nine years and in line with our 10-year strategic objectives, to deliver value to 1 billion end customers, empower 10,000 businesses, and build 1,000 leaders.


At Seamfix, we are keenly aware that there are endless possibilities if we can be one united people who speak the same creative language, create with the same picture of success, and work towards the same end goal. Hence, we are looking for a team player who resonates deeply with our vision, speaks that same creative language, and desires the same or an even bigger impact.


We help organizations acquire and serve large numbers of customers digitally by seamlessly automating their onboarding and service delivery processes, so that they can be more productive, make their customers happy, and boost their revenues. Our identity and essence lie in solving problems in a seamless manner, in line with our name: Seamfix is coined from the seamless fixing of problems.


Responsibilities

· Designing, constructing, and maintaining data pipelines and ETL processes (a brief sketch of this kind of pipeline follows this list).

· Developing and managing databases and data warehouses.

· Ensuring data quality, integrity, and security.

· Collaborating with data analysts and data scientists to enable efficient data access.

· Optimizing and tuning data systems for performance and scalability.
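
Below is a rough sketch of the kind of ETL pipeline work described above. It assumes Python with pandas and SQLAlchemy; the source file, schema, table name, and connection string are purely illustrative, not part of any actual Seamfix stack.

# Minimal ETL sketch: extract a CSV, clean it, and load it into a warehouse table.
# All names below (orders.csv, the analytics schema, the connection string) are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw records from a source file (could equally be an API or a message queue).
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop duplicates, normalise column names, and enforce types.
    df = df.drop_duplicates()
    df.columns = [c.strip().lower() for c in df.columns]
    df["order_date"] = pd.to_datetime(df["order_date"])  # assumes an order_date column exists
    return df

def load(df: pd.DataFrame, table: str, conn_str: str) -> None:
    # Load: append the cleaned frame to a warehouse table.
    engine = create_engine(conn_str)
    df.to_sql(table, engine, schema="analytics", if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("orders.csv")), "orders", "postgresql://user:pass@warehouse:5432/dw")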



PII Data Processor Responsibilities

· Design, create, and implement IT processes and systems that enable the data controller to gather personal data.

· Use tools and strategies to gather personal data.

· Implement security measures that safeguard personal data.

· Store personal data gathered by the data controller.

· Transfer personal data from the data controller to other organizations and vice versa.


Qualifications

· Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

· Proficiency in programming languages such as Python, Java, or Scala.

· Experience with data modeling and database design principles.

· Knowledge of ETL (Extract, Transform, Load) processes and tools.

· Familiarity with big data technologies like Hadoop, Spark, or Kafka.

 

Technical Skills and Competence

· Database Management: Proficiency in working with relational (SQL) databases, NoSQL databases (e.g., MongoDB), and data warehousing solutions (e.g., Amazon Redshift).

· ETL (Extract, Transform, Load): Strong knowledge of ETL processes to extract data from various sources, transform it into a suitable format, and load it into a data warehouse.

· Big Data Technologies: Familiarity with big data tools and frameworks such as Hadoop, Spark, and Hive for processing and analyzing large datasets.

· Data Integration: Expertise in integrating data from multiple sources, including APIs, streaming data, and batch processing.

· Data Modeling: Skill in designing and implementing data models, including dimensional modeling for data warehousing.

· Data Pipeline Automation: Experience in automating data pipelines using tools like Apache Airflow or similar workflow management systems (a minimal Airflow sketch follows this list).

· Data Quality Assurance: Knowledge of data quality best practices and tools to ensure data accuracy and consistency.

· Cloud Computing: Proficiency in cloud platforms like AWS, Azure, or Google Cloud, and the ability to deploy and manage data infrastructure in the cloud.

· Version Control: Familiarity with version control systems like Git for tracking changes to data pipelines and infrastructure as code.

· Programming Languages: Proficiency in languages like Python, Java, or Scala for data engineering tasks.

· Security and Compliance: Understanding of data security and compliance standards, especially in industries with sensitive data.
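
For the data pipeline automation point above, here is a minimal, hypothetical Apache Airflow sketch (TaskFlow API, Airflow 2.4+); the DAG name, schedule, and task bodies are placeholders rather than a prescribed implementation.

# Hypothetical daily pipeline using Airflow's TaskFlow API; task bodies are stubs.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Pull raw records from the source system (placeholder data here).
        return [{"order_id": 1, "amount": 100}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Apply cleaning / business rules.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # Write the cleaned rows to the warehouse (stubbed with a log line).
        print(f"loading {len(rows)} rows")

    # Chain the tasks: extract -> transform -> load.
    load(transform(extract()))

daily_orders_pipeline()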