Deepika Chavali - Data Engineer
[email protected]
Location: Charlotte, North Carolina, USA
Relocation: Any
Visa: F-1 STEM OPT
DEEPIKA CHAVALI
[email protected] | (980) 781-8700 | Charlotte, NC
PROFESSIONAL SUMMARY
Data Engineer with 4+ years of experience delivering enterprise and government-facing data platforms on AWS. Strong expertise in Aurora
PostgreSQL, SQL stored procedures, and cloud-native ETL workflows using S3, Glue, and Lambda. Hands-on experience integrating Java-based
REST APIs, supporting disaster recovery and high-availability strategies, and operating in long-term, regulated environments. Proven ability to
build reliable, auditable, and scalable data solutions aligned with enterprise and government standards.
SKILLS
Databases & Warehousing: Aurora PostgreSQL, PostgreSQL, SQL Stored Procedures, Snowflake (basic), SQL Server (exposure)
AWS & Cloud: Amazon S3, AWS Glue, AWS Lambda, IAM, Multi-AZ Architecture, Backup & Restore, Disaster Recovery
Data Engineering: ETL / ELT Pipelines, SQL Optimization, Python, PySpark, Spark SQL, Batch & Streaming Processing
Backend & Integration: Java (REST API integration), JSON, Service-to-Service Data Exchange
Orchestration & DevOps: Apache Airflow, Terraform (IaC), GitHub Actions (CI/CD), Monitoring & Alerting
CORE COMPETENCIES
Large-Scale Data Warehousing (Redshift, Snowflake)
SQL Data Modeling (Star & Snowflake Schemas)
AWS-Native ETL / ELT (Glue, Lambda, EMR)
Batch & Streaming Pipelines (Kinesis, MSK)
Ad-Hoc Analytics & Self-Service Reporting
Data Quality, Reliability & Observability
Performance & Cost Optimization
Infrastructure as Code & CI/CD Automation
Cross-Functional Stakeholder Collaboration
PROFESSIONAL EXPERIENCE
Data Engineer | Jul 2024 - Present
Acer America
Designed, developed, and maintained AWS-based ETL/ELT pipelines using S3, Glue, and Lambda to support enterprise analytics
platforms.
Worked extensively with Aurora PostgreSQL, developing and optimizing SQL stored procedures for ingestion, transformation, and
downstream analytics.
Integrated data pipelines with Java-based REST APIs, enabling secure and reliable data exchange between backend systems.
Implemented data validation, reconciliation, and error-handling logic to ensure production data accuracy and compliance.
Supported disaster recovery and high-availability practices, including backup validation, failover testing, and reprocessing strategies.
Collaborated with data analysts, backend engineers, and business stakeholders to deliver scalable, auditable data solutions.
Monitored and troubleshot production workflows, resolving pipeline failures and performance issues to meet SLAs.
Graduate Research Assistant | Aug 2023 - Dec 2023
University of North Carolina at Charlotte
Built cloud-based ETL workflows using Python, SQL, and Airflow, simulating enterprise-grade data platforms.
Designed and validated PostgreSQL and Snowflake (basic) schemas to support analytical workloads.
Integrated streaming and batch datasets into relational stores, applying data quality and consistency checks.
Supported infrastructure automation and CI/CD pipelines to ensure repeatable, compliant deployments.
Data Analyst | Mar 2020 - Jun 2022
Cooper Standard
Developed and optimized SQL queries and stored procedures to support enterprise reporting and analytics.
Supported data warehouse optimization through validation of schemas, indexes, and query performance.
Partnered with business and technology teams to support data integration and quality assurance efforts.
Assisted with validation of backup datasets and historical reloads to support data recovery processes.
Data Analyst | Jul 2019 - Feb 2020
Mphasis
Assisted in building SQL-based transformation pipelines to clean, validate, and prepare large datasets for analytics use cases.
Performed exploratory analysis and automated recurring reports, improving data quality by 30% and reducing manual effort by 40%.
Supported cross-functional teams by delivering consistent, analytics-ready datasets for reporting and insights.
PROJECTS
Enterprise Analytics & BI Data Platform (AWS)
Designed and operated AWS-native ETL/ELT pipelines integrating batch and streaming data from heterogeneous sources into Redshift,
enabling ad-hoc SQL analytics and BI reporting for internal stakeholders.
SQL-First Analytics & Reporting Pipelines
Built and maintained SQL-driven data transformation pipelines to produce analytics-ready datasets supporting KPI tracking, trend analysis,
and executive dashboards with reduced refresh latency.
Near-Real-Time Streaming & Monitoring Pipelines
Implemented low-latency ingestion pipelines using Kinesis/MSK and Spark Streaming to process high-volume telemetry data and deliver
timely insights for monitoring and analytics use cases.
Data Quality & Reliability Automation Framework
Developed automated data quality validation and monitoring workflows to enforce schema consistency, detect anomalies, and improve trust
in downstream analytics and reporting systems.
EDUCATION & CERTIFICATIONS
M.S., Data Science & Business Analytics - UNC Charlotte (2024)
Artificial Intelligence - University of Taiwan (2023)
B.Tech - SRM University (2022)
AWS Certified Data Engineer - Associate (2025)
Azure Data Engineer - Associate (2024)
AWS, Redshift, S3, Glue, EMR, Lambda, Kinesis, Firehose, MSK, SQL, Python, PySpark, Spark SQL, ETL, ELT, Data Warehouse, Jarvis, Data
Modeling, Star/Snowflake Schema, Large Datasets, Data Pipelines, Data Integration, Reporting Solutions, Ad-hoc Querying, BI Analytics, Data
Lake, Airflow, Step Functions, Terraform, CI/CD, Distributed Systems, Marketing Analytics, Big Data, Automation, IAM