Marketforce is a fast-growing Africa-based company building the operating system for retail distribution in Africa. Powered by technology and driven by heart, our mission is to drive Africa forward by creating economic empowerment for everyone along the retail supply chain. Our flagship product, RejaReja, helps informal retail merchants buy and sell FMCGs and digital financial services. We are a tech- and product-led company that gives its engineers ownership and autonomy in everything they do. We thrive on a good challenge, always seeing it as an opportunity to innovate, learn, and grow as individuals, a team, and a company.
Location: Join us in Nairobi, or remotely from wherever you call home.
The Impact You Will Make
We are looking for an experienced Data Engineer to join our growing data team. This hire will be responsible for expanding and optimizing our cloud data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software engineers, data analysts, and data scientists on data initiatives and will ensure that optimal data delivery architecture is consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even redesigning our company’s data architecture to support our next generation of products and data initiatives.
You will work together with Data Scientists and Data Analysts to build scalable and reliable data solutions (including AI).
You will take ownership of specific components, make them reliable and scalable, and help design the overall architecture on public cloud services.
You like to continuously grow your own skills, support your colleagues in improving theirs, and are willing to write and deliver production-level code and collaborate with various levels of management and technical staff.
- Create and maintain optimal data pipeline architecture (preferably with public clouds such as AWS)
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Sales and Data teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members.
- Provide data in a ready-to-use format for data scientists
- Continuously discover, evaluate, and implement new technologies to maximize process efficiency
- Identify performance bottlenecks and implement optimizations
- Mentor junior and mid-level data engineers
- Education: Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree in one of these fields is preferred.
- Personality: an open-minded, reliable, and motivated team player who is keen to learn new technologies and share knowledge while also being able to work independently.
- Experience and Knowledge: 4+ years of professional experience in software development; proficiency in modern programming languages (e.g. Python, Java, or Go) and Linux; experience developing software in AI and/or data-centric domains; demonstrated ability to write clean, maintainable, production-grade code.
- Experience scaling applications and services to large data volumes, including firm knowledge of modern big data tools and technologies (e.g. Spark, Parquet, Hadoop).
- Knowledge of relational databases (e.g. PostgreSQL) and modern, scalable storage technologies (e.g. S3); knowledge of metadata management and schema evolution strategies; knowledge of software engineering best practices (e.g. code reviews, CI/CD, testing) and cloud technologies (preferably AWS).
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience with AWS cloud services (e.g. EC2, EMR, RDS, Redshift).