
Pavan - Python Developer, Dallas TX, open for onsite
[email protected]
Location: Dallas, Texas, USA
Relocation: Yes
Visa: H1B
Pavan K
Email: [email protected]
+1 972-924-5835 (Employer)
Professional Summary
Python Full Stack Developer with 7+ years of experience in building enterprise web applications, cloud-native backend services, AI-enabled platforms, and scalable data pipelines using Python, FastAPI, Django, Flask, React, TypeScript, Redux, and AWS.
Proven expertise in REST API development, microservices, GraphQL integrations, distributed systems, and full-stack dashboard development for analytics, workflow automation, and enterprise business applications.
Hands-on experience in AI/LLM application development, including RAG pipelines, hybrid retrieval, metadata-aware search, prompt orchestration, and source-grounded response generation using Amazon Bedrock, OpenSearch, S3, Lambda, and SQS.
Strong experience building event-driven ETL and manifest-based ingestion pipelines using AWS Glue, PySpark, Step Functions, Lambda, and S3, delivering curated datasets for reporting, analytics, feature engineering, and ML-enabled applications.
Skilled in supporting MLOps and predictive analytics workflows by preparing feature-ready data, validating model inputs/outputs, and integrating model-driven insights into backend APIs and enterprise platforms.
Solid background in PostgreSQL, Redshift, Oracle, MySQL, and SQL Server, with expertise in SQL optimization, indexing, data modeling, relational access patterns, and transactional processing for high-performance enterprise systems.
Experienced in implementing secure and production-ready systems using JWT, OAuth 2.0, RBAC, audit logging, CloudWatch monitoring, PyTest, Docker, Jenkins, GitHub Actions, and CI/CD pipelines, with strong focus on reliability, observability, and maintainable software delivery.
Strong team player with experience collaborating across engineering, analytics, QA, platform, and business teams to deliver scalable, secure, and high-quality enterprise solutions in Agile environments.
Familiar with LangChain and LangGraph for designing agentic workflows, retrieval-augmented generation pipelines, multi-step reasoning flows, tool-integrated AI applications, and enterprise LLM orchestration patterns.

Experience
Senior Python Full Stack Developer
BNY Mellon, NC | 01/2025 - Present
Developed cloud-native full-stack applications using Python, FastAPI, React, TypeScript, and Redux, building scalable internal platforms, AI-enabled workflows, and business dashboards for enterprise users.
Designed and delivered React/TypeScript dashboard modules with reusable components, advanced search/filter workflows, source-linked AI responses, follow-up suggestions, and API-driven visualizations for operational and knowledge discovery use cases.
Engineered AWS-based RAG pipelines using Amazon Bedrock, Amazon OpenSearch Service, and S3, implementing query preprocessing, hybrid retrieval, context assembly, prompt orchestration, and structured response generation for grounded enterprise question-answering.
Built end-to-end AI search request flows covering request validation, query normalization, acronym expansion, metadata-aware retrieval, LLM invocation, response parsing, and source-cited answer delivery for internal business applications.
Developed manifest-driven ingestion pipelines using Amazon S3, SQS, Lambda, and FastAPI, processing S3 manifest files and large document batches with idempotent event handling, retry-safe consumption, and asynchronous indexing into OpenSearch.
Implemented event-driven document processing workflows using S3 notifications, SQS, and Lambda, enabling scalable ingestion, chunk generation, metadata enrichment, and vector-ready content preparation for enterprise AI search platforms.
Built secure backend services for authentication, authorization, request tracing, audit logging, correlation IDs, and standardized error handling, improving traceability and production support across AI and non-AI APIs.
Implemented RBAC and OAuth 2.0/JWT-based access control for protected enterprise APIs, including role validation, endpoint guards, token checks, and audit-ready request metadata for secure access to AI-powered features.
Designed and implemented REST APIs and GraphQL integrations with versioned endpoints, schema-based validation, idempotent request handling, and standardized response contracts to support reliable frontend and service-to-service communication.
Integrated backend services with PostgreSQL using SQLAlchemy and psycopg2, optimizing relational data access, query patterns, transaction handling, and high-throughput persistence for platform metadata, feedback, and operational logging.
Built metadata-driven AI platform components to manage document chunking, embedding configuration, retrieval settings, prompt templates, source attribution, and feedback capture, improving answer relevance, explainability, and maintainability.
Developed PyTest-based unit and integration test suites for FastAPI services, retrieval components, and event-driven pipelines, strengthening CI/CD quality gates and reducing production defects.
Supported CI/CD and release workflows using Git, GitLab pipelines, Docker, and AWS deployment processes, collaborating with Agile teams to ship production-ready features, bug fixes, and platform enhancements.
Implemented production observability using Amazon CloudWatch with structured logs, metrics, alerts, and latency/error tracking for API performance, OpenSearch retrieval, SQS retries, and Bedrock response behavior.
Contributed to architecture decisions, technology selection, and platform roadmap planning for scalable microservices, distributed systems, enterprise AI search, and dashboard-driven internal applications.
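The query normalization and document chunking steps described above can be sketched in plain Python. This is an illustrative, minimal version: the names (ACRONYMS, normalize_query, chunk_text) and the example acronym entries are hypothetical, not the actual platform code, and a real pipeline would feed the chunks into an embedding model and OpenSearch.

```python
import re

# Hypothetical acronym table; a production system would load this from
# platform metadata rather than hard-code it.
ACRONYMS = {"nav": "net asset value", "kyc": "know your customer"}

def normalize_query(query: str) -> str:
    """Lowercase, collapse whitespace, and expand known acronyms."""
    words = re.sub(r"\s+", " ", query.strip().lower()).split(" ")
    return " ".join(ACRONYMS.get(w, w) for w in words)

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into fixed-size chunks with overlap between
    consecutive chunks, ready for embedding and indexing."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighboring chunks, at the cost of a small amount of duplicate indexed text.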
Python Full Stack Developer (AWS)
Fidelity Investments, Boston, MA | 02/2024 - 12/2024

Contributed to Advisor Insights Data Hub, an internal analytics platform supporting Wealthscape Intelligence, digital onboarding, and AI/ML-driven advisor analytics by consolidating advisor, client, account, and trading activity data for intelligent reporting and decision support.
Built and deployed FastAPI microservices on AWS ECS Fargate, exposing RESTful APIs to deliver advisor KPIs, onboarding metrics, behavioral insights, and ML-ready analytics data to internal dashboard and reporting applications.
Developed scalable ETL and MLOps-supporting data pipelines using AWS Glue, PySpark, AWS Step Functions, and AWS Lambda to process raw Amazon S3 datasets into curated Parquet data products for downstream analytics, feature engineering, batch scoring, and ML model consumption.
Supported MLOps workflows by preparing high-quality feature-ready datasets, enforcing schema validation and data quality checks, and improving the reliability of data pipelines used by predictive analytics and internal ML-enabled applications.
Implemented event-driven ingestion pipelines using Amazon S3, SQS, and DLQ, with idempotency controls, retry/backoff logic, and failure isolation to ensure replay-safe and resilient processing of enterprise data feeding analytics and ML workflows.
Collaborated with analytics and platform teams to streamline model operationalization by enabling secure backend access to curated datasets, integrating model output data into downstream APIs, and supporting reliable delivery of model-derived insights to internal consumers.
Designed and optimized SQL queries, aggregations, joins, and indexing strategies across Aurora PostgreSQL and Amazon Redshift, improving query performance, data retrieval efficiency, and low-latency access for analytics and ML-supporting workloads.
Built data validation, governance controls, and SLA checks across onboarding and reporting pipelines, improving data accuracy, reducing manual reconciliation, and strengthening trust in datasets used for business intelligence and machine learning workflows.
Enhanced observability and monitoring using Amazon CloudWatch, structured logging, and SNS alerts, providing better visibility into ETL jobs, API health, pipeline failures, and ML workflow dependencies.
Automated CI/CD workflows using Jenkins and GitHub Actions for test automation, container builds, and deployment of FastAPI microservices, cloud-native data pipelines, and ML-supporting backend services.
Partnered with platform, analytics, and data governance teams to implement secure IAM policies, access controls, and compliant cloud data access patterns across Glue, Lambda, S3, Redshift, Aurora PostgreSQL, and internal analytics services.
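The idempotency and retry/backoff pattern described in the ingestion bullets above can be sketched as follows. This is a simplified stand-in: an in-memory set replaces the real deduplication store, the handler is injected, and a real SQS consumer would rely on visibility timeouts and a configured DLQ redrive policy rather than an in-process loop.

```python
import time

# In-memory stand-in for a durable deduplication store (e.g. a processed-message
# table); replay-safe because redelivered messages are recognized and skipped.
processed_ids: set[str] = set()

def handle_event(event: dict, process, max_retries: int = 3,
                 base_delay: float = 0.0) -> str:
    """Process one event idempotently with bounded retries and
    exponential backoff; exhausted retries route to a dead-letter path."""
    msg_id = event["id"]
    if msg_id in processed_ids:
        return "duplicate"          # idempotency: safe on redelivery
    for attempt in range(max_retries):
        try:
            process(event)
            processed_ids.add(msg_id)
            return "processed"
        except Exception:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    return "dead-letter"            # failure isolation: park for inspection
```

Marking a message processed only after the handler succeeds means a crash mid-processing leads to a retry, not a lost event, which is the usual at-least-once trade-off.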

Full-Stack Developer
IBM, India | 08/2020 - 12/2022
Built secure RESTful APIs using Python, Django, and Django REST Framework (DRF) for enterprise web applications, leveraging serializers, viewsets, request validation, and standardized error handling.
Developed backend services for business logic, data processing, API orchestration, and system integrations across internal enterprise platforms.
Designed and maintained React-based UI modules for dashboards, form workflows, and API-driven interfaces, supporting responsive and data-centric user interactions.
Implemented authentication and authorization using JWT, OAuth 2.0, and RBAC, securing application endpoints with permission-based access controls.
Integrated Python services with Salesforce and internal enterprise systems for workflow tracking, status synchronization, reporting, and cross-system data exchange.
Added input validation, exception handling, retry mechanisms, and consistent response contracts to improve API reliability, maintainability, and fault tolerance.
Used Python, pandas, and NumPy to clean, transform, and preprocess structured datasets for analytics, reporting, and ML-enabled application workflows.
Supported ML workflows involving Logistic Regression, Random Forest, Decision Trees, and XGBoost for structured enterprise data use cases such as classification, scoring, and prediction support.
Performed feature engineering, missing value handling, categorical encoding, input normalization, and dataset standardization to prepare model-ready data.
Supported ML operations by preparing training and inference datasets, validating prediction outputs, and integrating model inference results into backend services and application workflows.
Assisted in exposing ML prediction workflows through Python APIs, enabling downstream enterprise applications to consume model outputs in operational processes.
Containerized backend services using Docker and supported deployment activities in AWS-based environments, including environment setup, runtime configuration, and release support.
Assisted with CI/CD pipelines, build validation, deployment support, monitoring, and production issue resolution for backend and ML-enabled applications.
Collaborated with frontend, QA, analytics, and business teams in Agile/Scrum delivery cycles to implement features, resolve defects, and support releases.
Maintained technical documentation for APIs, integrations, ML workflows, deployment steps, and backend service architecture to improve supportability and team onboarding.
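The feature-engineering steps listed above (missing-value imputation, categorical encoding, normalization) can be sketched in plain Python. This is a toy illustration of the transformations, not the actual project code; the real pipelines used pandas and NumPy over structured enterprise datasets.

```python
def impute(values: list, fill) -> list:
    """Replace missing (None) entries with a fill value."""
    return [v if v is not None else fill for v in values]

def one_hot(values: list, categories: list) -> list[list[int]]:
    """One-hot encode categorical values against a fixed category order."""
    return [[1 if v == c else 0 for c in categories] for v in values]

def min_max(values: list[float]) -> list[float]:
    """Scale numeric values to the [0, 1] range (min-max normalization)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant columns
    return [(v - lo) / span for v in values]
```

Fixing the category order up front (rather than deriving it per batch) keeps training and inference feature vectors aligned, which matters when serving model predictions from backend APIs.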

Software Developer
TCS, Bangalore, India | 05/2018 - 07/2020
Developed Python-based backend services and REST APIs for enterprise applications.
Built backend modules using Flask, Python, and SQL for business logic and database interaction.
Wrote and optimized SQL queries for Oracle, MySQL, and SQL Server databases.
Developed Python scripts for data extraction, transformation, and loading into SQL databases.
Supported legacy data migration using SQL validations, reconciliation checks, and transformation logic.
Automated support and maintenance tasks using Python, Bash, and database scripts.
Assisted with deployments, monitoring, and production support for backend applications.
Collaborated with cross-functional teams to resolve API and database integration issues.
Used Git and Agile practices for sprint-based development and delivery.
Maintained technical documentation for APIs, SQL workflows, and backend processes.
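A migration reconciliation check of the kind described above can be sketched with row-count and column-sum comparisons. This example uses in-memory SQLite as a stand-in for the Oracle/MySQL sources; the table and column names are illustrative.

```python
import sqlite3

def reconcile(conn, source: str, target: str) -> dict:
    """Compare row counts and an amount-column checksum between a
    source table and its migrated target table."""
    cur = conn.cursor()
    src_count = cur.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    src_sum = cur.execute(f"SELECT COALESCE(SUM(amount), 0) FROM {source}").fetchone()[0]
    tgt_sum = cur.execute(f"SELECT COALESCE(SUM(amount), 0) FROM {target}").fetchone()[0]
    return {"counts_match": src_count == tgt_count,
            "sums_match": src_sum == tgt_sum}
```

Count checks catch dropped or duplicated rows; a sum over a numeric column additionally catches value-level transformation errors that counts alone would miss.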

Education
B.Tech in Information Technology, Vasavi College of Engineering, Hyderabad | Graduated May 2018

Master's in Data Science, SJSU, CA, USA | Jan 2023 - Dec 2024


Technical Skills
Programming Languages: Python (3.x), Bash, SQL, Groovy, JavaScript (ES6+), TypeScript
Frameworks & Libraries: Flask, Django, FastAPI, LangChain, LangGraph, Pandas, NumPy, PySpark, PyTorch, TensorFlow, Hugging Face Transformers, Matplotlib, Seaborn, React.js, Next.js, HTML5, CSS3
Cloud & DevOps: AWS (Lambda, ECS, EC2, SQS, SNS, S3, RDS, API Gateway, Step Functions, Glue, Athena, CloudWatch, DynamoDB), Docker, Kubernetes, Helm, Terraform, Jenkins, GitHub Actions, AWS CDK, Elastic Beanstalk
Data Engineering & ETL Tools: Apache Airflow, AWS Glue, Kafka, Kinesis, Snowflake, AWS DataSync, Apache Spark
Monitoring & Observability: Prometheus, Grafana, OpenTelemetry, Splunk, ELK Stack (Elasticsearch, Logstash, Kibana)
Authentication & Security: OAuth 2.0, JWT, AWS IAM (Identity and Access Management), HashiCorp Vault, Qualys
Databases: PostgreSQL, MySQL, MS SQL Server, Oracle, Redis, MongoDB, CouchDB, Snowflake
API Development & Integration: RESTful APIs, GraphQL, Django REST Framework, FastAPI, gRPC, Swagger/OpenAPI
Version Control & CI/CD: Git, GitHub, Bitbucket, Jenkins, GitHub Actions, GitLab CI/CD, Groovy scripting
Containerization & Deployment: Docker, Kubernetes, AWS ECS, Helm, Elastic Beanstalk
Tools & Platforms: JIRA, Git, GitHub, Bitbucket, Tableau, Sphinx, Postman, Cypress, Selenium
Markup & Data Formats: JSON, XML, YAML
Others: Apache Spark, LangChain, Athena, Glue Catalog, PyTest, pytest-mock, Linux Shell Scripting; strong analytical and problem-solving skills
