| Madhav Sai Avanigadda - AI Engineer |
| [email protected] |
| Location: Irving, Texas, USA |
| Relocation: Yes |
| Visa: Green Card |
| Resume file: Madhav,SR AI-ML Engineer_1775488727941.docx |
|
Senior AI / GenAI Engineer with 11+ years of experience designing and delivering production-grade machine learning systems, LLM-powered applications, and distributed backend platforms across multiple enterprise domains.
- Built strong expertise in Python engineering, ETL pipelines, and data engineering, expanding into full-stack capabilities across backend services, scalable architectures, and enterprise-grade AI solutions.
- Proven experience architecting end-to-end AI systems covering data ingestion, transformation, feature engineering, model development, deployment, monitoring, and optimization in large-scale production environments.
- Specialized in Generative AI, including LLMs, RAG pipelines, vector databases, and semantic retrieval techniques that enable contextual reasoning and enterprise knowledge discovery across structured and unstructured datasets.
- Experienced in building LLM-powered applications with LangChain, LangGraph, Vertex AI, AWS Bedrock, and Claude models, supporting scalable orchestration, multi-model integration, and agentic AI workflows.
- Designed scalable backend architectures using REST APIs, microservices, and event-driven systems to support machine learning platforms, real-time data pipelines, and distributed AI workloads.
- Developed and deployed machine learning and deep learning models with PyTorch, TensorFlow, and scikit-learn for fraud detection, demand forecasting, risk modeling, and clinical decision support.
- Engineered scalable data pipelines with PySpark, SQL, and cloud-native tools, enabling efficient processing, transformation, and management of large-scale structured and unstructured datasets across distributed environments.
- Designed optimized data workflows covering ingestion, transformation, and feature engineering, ensuring data quality, consistency, and reliability for both batch and real-time analytical pipelines.
- Architected and deployed cloud-native AI platforms on AWS, Azure, and GCP, leveraging containerization, orchestration, and distributed systems for scalable, resilient, high-performance production workloads.
- Implemented comprehensive MLOps frameworks using MLflow, CI/CD pipelines, and containerized deployments, enabling model versioning, automated releases, monitoring, and lifecycle management for production AI systems.
- Built robust real-time streaming systems with Kafka, Kinesis, and event-driven architectures, enabling low-latency data processing, continuous model inference, and scalable handling of high-volume transactional pipelines.
- Applied expertise in statistical modeling, experimentation, and feature engineering, enabling rigorous model validation, performance optimization, and actionable insights for data-driven business decisions.
- Designed modular, reusable system architectures using object-oriented programming and distributed design principles, improving scalability, maintainability, and extensibility while reducing technical debt across AI-driven platforms.
- Hands-on experience with LLMOps, AI agents, RAG evaluation frameworks, prompt engineering, and cost-efficient deployment strategies for large language models in production enterprise environments.
- Optimized LLM performance through latency reduction, cost-efficient inference strategies, and model evaluation techniques, ensuring scalable, reliable, production-ready generative AI deployments.

Keywords: continuous integration, continuous deployment, artificial intelligence