Data Engineer, Tech Writer
I am a Data Engineer; I help startups build end-to-end data pipelines.
About Mona
I never wanted to be an engineer, but when life gives you a project idea, you build a startup. My engineering journey started in 2018. I have worked with four startups so far, and I am a co-founder of the Mobile-web.dev community.
🕵️ Work
Ryan-miranda Partners | Data Engineer | Mar 2021–Present

Experienced in data warehousing and data management.
- Build and optimize data pipelines from several sources, including Google AdWords, Facebook Ads, TikTok, Shopify, Google Analytics, Bing Ads, Klaviyo, and Recharge.
- Spearheaded the organization's transition from the paid ETL service Stitch to the open-source platform Meltano.
- Create models and transform data using SQL and dbt; write, debug, and test ETL processes for new and existing data pipelines.
- Built source extractor APIs from scratch for Grin, Facebook, Shopify Users, and LinkedIn.
- Orchestrate data workflows using Airflow and MWAA.
- Built a CI/CD pipeline for development.
- Dockerized the React + Flask app.
- Set up a disaster-recovery region for the application in AWS.
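The source extractors above share a common shape: pull cursor-paginated pages from a vendor API until the cursor runs out. A minimal sketch of that pattern, assuming an illustrative response shape (`records` plus `next_cursor`) rather than any vendor's real API:

```python
from typing import Callable, Iterator, Optional

# Assumed (illustrative) page shape: {"records": [...], "next_cursor": str | None}
Page = dict

def extract_records(fetch: Callable[[Optional[str]], Page]) -> Iterator[dict]:
    """Yield every record from a cursor-paginated source.

    `fetch` is a stand-in for a vendor API call (e.g. a Shopify or
    LinkedIn client); it takes a cursor, with None meaning "first page".
    """
    cursor: Optional[str] = None
    while True:
        page = fetch(cursor)
        yield from page["records"]
        cursor = page.get("next_cursor")
        if cursor is None:
            break
```

In a Singer/Meltano-style tap, a loop like this would typically sit inside the stream's record-fetching method, with `fetch` performing the authenticated HTTP request.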
Tywn | Artificial Intelligence Engineer | July 2020–April 2021

I worked with image data; my work revolved around image acquisition, annotation and labelling, and training and testing models with deep learning architectures such as YOLO and TFOD.
twimbit | Data Scientist | Jan 2020–Aug 2020

Worked on a research project on an NLP-based system:
- Implemented topic-modelling algorithms such as LDA.
- Built a question-answering API using a BERT model for user queries.
- Implemented real-time sentiment analysis of documents with pre-trained models.
- Built a web crawler API (using Flask) that scrapes data from multiple websites.
- Worked on a team building a Chrome extension with Auth0 integration.
- Worked on ETL pipelines using Apache Airflow.
- Deployed machine learning models and APIs on an AWS instance.
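A stripped-down sketch of the parsing core behind a crawler like the one above, using only the standard library; the Flask routing and HTTP-fetching layers are omitted, and the class and function names are my own, not from the original project:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect the href of every <a> tag on a page.

    This is only the parsing step of a crawler; fetching the page
    and exposing the result behind a Flask route are left out.
    """
    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def scrape_links(html: str) -> list[str]:
    """Return all link targets found in an HTML document."""
    parser = LinkScraper()
    parser.feed(html)
    return parser.links
```

The same pattern extends to scraping article text or metadata by handling other tags in `handle_starttag`/`handle_data`.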
Kritrim | Junior Data Scientist | Sep 2019–Feb 2020

Here I worked on an AutoML product.
- Gained a working knowledge of big data tools such as PySpark, Apache Beam, Hadoop, and Hive.
- Helped build clean, maintainable code.
- Supported end-to-end data science software based on machine learning and statistics; identified business requirements and developed use cases.