{"data":{"jobs":{"edges":[{"node":{"frontmatter":{"title":"Senior Data Engineer","company":"SkywardOps","location":"Remote, Nigeria","range":"May 2025 – Present","url":""},"html":"<ul>\n<li>Designed and deployed Apache Iceberg table infrastructure using AWS Glue Catalog and Terraform, enabling ACID transactions and schema evolution across multi-petabyte datasets with zero-copy cloning capabilities for analytics workloads.</li>\n<li>Built external Iceberg tables in Snowflake connected to S3 data lake, providing unified query access across warehouse and lakehouse while reducing storage costs by 40% through separation of compute and storage.</li>\n<li>Architected near real-time data pipelines using AWS Glue streaming ETL jobs integrated with SQS, SNS, and Lambda for event-driven processing, achieving sub-minute data freshness for business-critical dashboards.</li>\n<li>Created stored procedures for data quality validation, schema drift detection, and metadata-driven ETL orchestration, improving pipeline reliability and reducing data incident response time.</li>\n<li>Ensured pipeline reliability by monitoring Jenkins, managing vulnerabilities, and setting up alerts to reduce downtime and meet SLAs.</li>\n<li>Built and deployed a production AI personal finance advisor agent using Mastra and LangChain, with REST API authentication, CI/CD via GitHub Actions, and observability through Mastra Studio.</li>\n</ul>"}},{"node":{"frontmatter":{"title":"Data Infrastructure Engineer","company":"Sabino LLC","location":"Remote, Nigeria","range":"May 2024 – Apr 2025","url":""},"html":"<ul>\n<li>Designed automated ETL pipelines using AWS Glue, Python, and dbt with Glue Crawlers for schema discovery\nacross 15+ data sources, reducing processing time by 60% and orchestrating transformations into Snowflake.</li>\n<li>Built monitoring infrastructure with Prometheus, Grafana, and Loki for data pipelines, achieving 99.8% uptime\nacross distributed systems.</li>\n<li>Engineered containerized 
data platform on AWS and OVH bare metal infrastructure using Docker, scaling data\nprocessing capacity with 40% cost optimization through efficient resource management.</li>\n<li>Managed mission-critical data infrastructure across Linux environments with CI/CD automation, maintaining 90%\nsystem availability and reducing deployment time from hours to minutes.</li>\n<li>Implemented proactive data observability solutions with custom Python monitoring frameworks, building multiple\nGrafana dashboards to track key metrics and integrating Slack alerts, decreasing mean time to resolution (MTTR)\nfor data incidents by 75%.</li>\n</ul>"}},{"node":{"frontmatter":{"title":"Data Engineer","company":"Upwork (Freelance)","location":"Remote, Worldwide","range":"Jan 2022 – Present","url":"https://www.upwork.com/"},"html":"<ul>\n<li>Delivered custom Python scripts to automate data extraction, processing, and delivery for 15+ clients, enabling\nbusinesses to meet critical deadlines and improve data-driven decision-making capabilities.</li>\n<li>Resolved data pipeline bottlenecks by re-architecting data flows using Apache Kafka and Apache Spark,\nexpediting data delivery to customers by 40% and enhancing user satisfaction scores.</li>\n<li>Optimized SQL queries to enhance data extraction and processing speeds, boosting system performance.</li>\n<li>Developed and optimized data pipelines with tools like Airbyte, Airflow, and dbt for ETL processes, ensuring\nscalable, automated data workflows.</li>\n<li>Created insightful data visualizations in Grafana &#x26; Google Looker to support data-driven decision-making,\nmonitoring key metrics and system performance.</li>\n</ul>"}},{"node":{"frontmatter":{"title":"Data Engineer","company":"Newswise","location":"Remote, US","range":"Mar 2022 – Apr 2024","url":"https://www.newswise.com/"},"html":"<ul>\n<li>Architected robust ETL pipelines that process data from 50+ news sources with 89% accuracy, delivering data\nstreams to critical internal 
systems and applications.</li>\n<li>Established and refined data ingestion pipelines that processed a high volume of news articles daily, significantly\naccelerating data retrieval by 40% for news feed applications.</li>\n<li>Reduced ETL process runtime by 30% through optimization and parallel processing techniques.</li>\n<li>Played a key role in scaling pipelines to handle a 50% increase in data volume, ensuring smooth\ningestion and processing.</li>\n</ul>"}},{"node":{"frontmatter":{"title":"Software Engineer (Data)","company":"Ocassio LTD","location":"Remote, Nigeria","range":"Nov 2021 – Dec 2023","url":""},"html":"<ul>\n<li>Standardized data exchange across 15+ services by architecting RESTful APIs using FastAPI, enhancing system\ninteroperability and enabling real-time data synchronization capabilities.</li>\n<li>Streamlined data workflows by automating processing, transformation, and integration across systems,\nwhile improving information visualization through ingestion from diverse sources.</li>\n<li>Collaborated with ML &#x26; analytics teams, using AWS Redshift for data warehousing and dbt for modeling,\nachieving 15% better decision-making and 25% fewer errors.</li>\n</ul>"}},{"node":{"frontmatter":{"title":"Machine Learning Engineer / Mentor","company":"Technocolabs Softwares","location":"Remote, India","range":"Feb 2022 – Mar 2022","url":"https://technocolabs.com/"},"html":"<ul>\n<li>Delivered expert technical guidance to machine learning and data science interns, led cross-team collaboration on\nsoftware solutions, and managed distributed teams, achieving high success rates and reducing deployment errors.</li>\n<li>Developed a data pipeline to collect and preprocess music data, leveraging spectral analysis and unsupervised\nlearning to enhance dataset quality, reduce dimensionality, and improve feature 
extraction.</li>\n</ul>"}},{"node":{"frontmatter":{"title":"Machine Learning Engineer Intern","company":"Technocolabs Softwares","location":"Remote, India","range":"Nov 2021 – Feb 2022","url":"https://technocolabs.com/"},"html":"<ul>\n<li>Built a data pipeline with Python and APIs to collect and preprocess audio data from a music platform.</li>\n<li>Applied spectral analysis to enhance dataset quality and size by 70%.</li>\n<li>Used unsupervised learning to detect repeated patterns, reducing dimensionality while keeping key features.</li>\n<li>Collaborated on developing and evaluating ML models, including a neural network, for optimal performance.</li>\n</ul>"}}]}}}