
Dinesh - Senior Python Developer
[email protected]
Location: Dallas, Texas, USA
Relocation: Yes
Visa: GC
Resume file: Dinesh Senior Python_Developer M_1766183075335.docx
Around 10 years of experience as a Python Developer, with analytical programming using Python, Django, C++, XML, HTML5/CSS3, JavaScript, and jQuery.
Experienced in Object Oriented programming (OOP) concepts using Python and Java.
Experience in designing data models for OLTP/OLAP database systems.
Good understanding of NoSQL databases such as MongoDB, DataStax, Redis, and Apache Cassandra.
Web development using Python and the Django web framework.
Developed RESTful APIs using Python frameworks (Django REST Framework/Flask) and consumed them in JavaScript applications.
Experienced in caching for large-scale applications using Memcached and Redis (see the caching sketch after this summary).
Strong knowledge of GraphQL schemas, queries, and mutations to interact with MongoDB and several other data layers.
Leveraged AI-assisted development tools (GitHub Copilot, AWS CodeWhisperer) to accelerate backend API and ETL module creation, improving code quality, maintainability, and delivery speed.
Experience in integrating ServiceNow with Python and REST APIs to automate incident management and service request workflows.
Designed and implemented AI-driven personalization features in the Nudge platform using NER, RAG, and NLP models to tailor employee engagement nudges.
Integrated LangChain-based pipelines with AWS services to enable chatbot interactions, contextual search, and semantic retrieval across HR and payroll data.
Strong domain expertise in Capital Markets with hands-on experience in front-, middle-, and back-office systems for trading, risk, and regulatory reporting.
Worked on Interest Rate Swaps, Bonds, FX, and Repo instruments, implementing risk & P&L attribution models and trade lifecycle automation.
Experienced in AI-assisted development: implemented AI-driven personalization (NER, RAG, NLP) and LangChain chatbot pipelines integrated with AWS for contextual retrieval and automation.
Solid understanding of risk management, valuation (DV01, PV, VaR), and regulatory frameworks such as Volcker Rule, MiFID II, and Dodd-Frank.
Designed Python-based automation and reporting tools for trade surveillance, compliance, and regulatory submissions in Capital Market environments.
Collaborated with financial analysts, quants, and compliance teams to align trading data systems with risk, P&L, and audit requirements.
Collaborated with frontend teams to integrate FastAPI APIs with React and Angular frameworks, enabling seamless data exchange between frontend and backend systems.
Configured NoSQL databases such as Apache Cassandra and MongoDB to increase compatibility with Django.
Developed custom activities and tasks in Step Functions to automate business processes and data pipelines.
Designed and implemented database schemas for both PostgreSQL and MSSQL, ensuring data integrity and optimizing query performance.
Developed modules by applying Object Oriented Programming (OOP) techniques using Polymorphism, Encapsulation and Inheritance.
Utilized JavaScript's asynchronous programming features (Promises, async/await) to handle asynchronous operations and improve application performance.
Extensive knowledge in RDBMS (MySQL, Oracle) & Big Data Databases.
Expertise in working with databases such as MySQL, PostgreSQL, and Oracle, and very good knowledge of NoSQL databases MongoDB and Redis.
Independently integrated multiple APIs and new features using the React + GraphQL stack.
Experience in configuring auto-scalable and highly available microservices with monitoring and logging using AWS, Docker, Jenkins, and Splunk.
Strong knowledge of SQL concepts - CRUD operations and aggregation framework.
Experience in MongoDB database design, including indexing and sharding.
Extensive knowledge of source code management (SCM) tools such as GitHub, AWS CodeCommit, and Bitbucket.
Skilled in XML, HTML, DHTML, and Ajax, with experience using Tomcat and Apache servers on UNIX, Linux, and Windows.
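The caching experience noted above can be illustrated with a minimal sketch: a Flask endpoint that checks Redis before hitting the database. The endpoint, key format, and lookup helper are hypothetical, and it assumes a local Redis instance with the flask and redis client libraries installed.

    import json

    import redis
    from flask import Flask, jsonify

    app = Flask(__name__)
    cache = redis.Redis(host="localhost", port=6379, db=0)

    def load_profile_from_db(user_id):
        # Placeholder for a real database lookup
        return {"user_id": user_id, "plan": "standard"}

    @app.route("/profiles/<int:user_id>")
    def get_profile(user_id):
        key = f"profile:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            # Serve the cached copy and skip the database round trip
            return jsonify(json.loads(cached))
        profile = load_profile_from_db(user_id)
        cache.setex(key, 300, json.dumps(profile))  # expire after 5 minutes
        return jsonify(profile)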

Academic Qualification

Bachelor's in Computer Science Engineering, National Institute of Engineering, Mysore, 2011-2015

TECHNICAL SKILLS

Programming & Scripting Languages: Python, C, C++, Java, HTML, CSS, JavaScript
Databases: MS Access, Oracle 12c/11g/10g/9i, Teradata, Hadoop, PostgreSQL, DynamoDB, Cassandra, Redis
ETL/BI Tools: Informatica PowerCenter 9.x, Tableau, Cognos BI 10, MS Excel, SAS, SAS/Macro, SAS/SQL
Data Modeling: Erwin r9.6/9.5/9.1/8.x, Rational Rose, ER/Studio, MS Visio, SAP PowerDesigner
Web Packages: Google Analytics, Adobe Test & Target, Web Trends
Big Data Ecosystem: HDFS, Pig, MapReduce, Hive, Sqoop, Flume, HBase, Storm, Kafka, Elasticsearch, Redis, ETL
Statistical Methods: Time Series, regression models, splines, confidence intervals, principal component analysis and Dimensionality Reduction, bootstrapping
Cloud: AWS (S3, EC2, SageMaker, Athena, Redshift)
Big Data / Grid Technologies: Cassandra, Coherence, MongoDB, Zookeeper, Titan, Elasticsearch, Storm, Kafka, Hadoop

PROFESSIONAL EXPERIENCE

Client: ADP (Automatic Data Processing), Remote Feb 2025 - Present
Role: Sr. Python Developer

Designed, built, and maintained scalable backend systems for ADP's data-driven platforms, supporting high-performance ingestion and partitioning of HR, payroll, and compliance data.
Developed end-to-end data pipelines using PySpark, AWS Glue, and Databricks, ensuring efficient ETL workflows for AI/ML and analytics workloads.
Engineered distributed Python-based microservices to handle real-time data ingestion and transformation across multiple regions.
Implemented data partitioning strategies in Spark and Snowflake to optimize large-scale query performance and reduce compute costs (see the partitioning sketch after this section).
Built production-grade ETL frameworks supporting ingestion of structured, semi-structured, and streaming data.
Wrote clean, modular, and well-tested Python and Spark code, ensuring maintainability and deployment consistency.
Integrated AI/ML preprocessing pipelines with S3 and SageMaker for LLM fine-tuning datasets and model input preparation.
Orchestrated data pipelines using AWS Step Functions and Airflow, automating ingestion, transformation, and validation.
Collaborated with ML teams to design feature stores and training data pipelines supporting NLP and recommendation models.
Leveraged AWS Lambda, EventBridge, and Redshift for scalable data ingestion, event processing, and analytics.
Deployed containerized Spark jobs using Docker and Kubernetes to handle multi-terabyte batch and stream workloads.
Implemented robust unit and integration tests for Python and Spark modules, ensuring consistent production reliability.
Applied data quality and validation frameworks to maintain consistency across ingestion and transformation stages.
Designed schema evolution and partition management logic for handling multi-tenant AI data pipelines.
Developed backend APIs in FastAPI and Flask to expose processed datasets to downstream ML and analytics services.
Automated CI/CD pipelines via Jenkins and GitHub Actions for continuous testing, linting, and deployment.
Implemented Redis caching to accelerate query responses from Python microservices and pipeline checkpoints.
Monitored Spark jobs and Python APIs using CloudWatch, Grafana, and Prometheus, improving observability.
Collaborated with data scientists to operationalize AI/ML models, managing inference pipelines and versioned datasets.
Conducted load and performance testing for ingestion workflows to ensure low-latency and fault-tolerant data flow.
Enforced data governance and lineage tracking for AI pipelines using AWS Glue Data Catalog and Lake Formation.
Documented data ingestion and Spark workflow standards for team-wide consistency and code reuse.
Environment: Python, PySpark, Databricks, AWS (Lambda, Glue, Redshift, RDS, S3, EventBridge, SageMaker), FastAPI, Snowflake, Terraform, Docker, Jenkins, Airflow, Redis, CloudWatch, Grafana, Kubernetes, Agile/Scrum
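A minimal sketch of the date-partitioned Spark ingestion pattern described above; the S3 paths and column names are hypothetical, and it assumes a PySpark runtime with S3 access already configured (for example, Glue or EMR).

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("payroll_ingest").getOrCreate()

    # Read raw payroll events landed in S3 (path is a placeholder)
    raw = spark.read.json("s3://example-bucket/raw/payroll/")

    # Deduplicate and derive a date column to partition on
    cleaned = (raw
               .dropDuplicates(["event_id"])
               .withColumn("event_date", F.to_date("event_ts")))

    # Write date-partitioned Parquet so downstream queries can prune partitions
    (cleaned.repartition("event_date")
            .write.mode("overwrite")
            .partitionBy("event_date")
            .parquet("s3://example-bucket/curated/payroll/"))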

Client: Charles Schwab Oct 2022 - Jan 2025
Role: Sr. Python Developer
Responsibilities:

Designed and implemented scalable backend systems for regulatory and risk analytics platforms using Python and Spark.
Built high-performance data ingestion and partitioning frameworks to process trade, compliance, and financial datasets.
Developed Spark-based ETL pipelines for real-time and batch ingestion from multiple trading and audit systems.
Wrote efficient Python and PySpark modules for transformation, aggregation, and anomaly detection in large datasets.
Supported AI/ML data preparation pipelines for model training on trade surveillance and risk prediction workloads.
Architected microservices and APIs using Flask, FastAPI, and Django to serve processed and enriched datasets.
Integrated Kafka and Spark Streaming for low-latency event processing and risk data feeds (see the streaming sketch after this section).
Designed data partitioning and bucketing strategies in Spark to optimize distributed joins and reduce shuffle overhead.
Implemented unit tests and Pytest frameworks for all Python modules and Spark transformations.
Automated ETL job orchestration using AWS Glue and Airflow, improving transparency and reliability.
Established data validation and reconciliation frameworks ensuring pipeline accuracy and audit readiness.
Applied Spark SQL and DataFrame APIs to construct reusable transformation layers for AI-ready datasets.
Configured multi-region data replication and partition pruning to improve ingestion and analytics latency.
Developed service-layer caching and data prefetching logic using Redis to improve API responsiveness.
Enhanced backend observability with Grafana, CloudWatch, and Splunk dashboards for Spark and Python services.
Collaborated with ML engineers to define feature extraction and data labeling pipelines supporting AI model retraining.
Optimized Spark job parallelism and cluster configurations for cost-effective distributed processing.
Delivered data ingestion and orchestration frameworks supporting large-scale compliance and risk reporting.
Applied best practices for CI/CD, version control, and code quality, improving deployment stability.
Partnered with DevOps to containerize Spark workloads for reproducible testing and deployment environments.
Refactored existing ETL codebases to enhance maintainability and reduce processing time by 40%.
Participated in Agile sprints, reviews, and backlog refinements to align delivery with business and compliance goals.

Environment: Python, PySpark, AWS (Glue, Redshift, Lambda, ECS, RDS, S3), Kafka, FastAPI, Django, Flask, Airflow, Redis, Terraform, Docker, Kubernetes, Jenkins, CloudWatch, Grafana, Snowflake, Spark SQL, Agile/Scrum
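A minimal sketch of the Kafka-to-Spark Structured Streaming pattern referenced above; the broker, topic, schema, and output paths are hypothetical, and it assumes the spark-sql-kafka connector is available on the cluster.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("risk_feed").getOrCreate()

    # Illustrative schema for incoming trade events
    schema = StructType([
        StructField("trade_id", StringType()),
        StructField("symbol", StringType()),
        StructField("notional", DoubleType()),
    ])

    # Subscribe to a Kafka topic and parse the JSON payload
    trades = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "trades")
              .load()
              .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
              .select("t.*"))

    # Persist the parsed stream with checkpointing for fault-tolerant file output
    query = (trades.writeStream
             .format("parquet")
             .option("path", "s3://example-bucket/streams/trades/")
             .option("checkpointLocation", "s3://example-bucket/checkpoints/trades/")
             .start())
    query.awaitTermination()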

Client: Meijer, Grand Rapids, MI Oct 2019 - Sep 2022
Role: Sr. Python Developer
Responsibilities:
Designed and developed scalable backend services for retail data processing, vendor management, and logistics systems.
Built and maintained Spark-based ETL pipelines for high-volume data ingestion from POS, supplier, and warehouse feeds.
Implemented data partitioning strategies in Spark and Snowflake to enhance parallelism and analytical performance.
Developed Python and PySpark workflows for data transformation, cleansing, and aggregation supporting analytics and ML workloads.
Integrated ETL pipelines with AWS Glue, S3, and Redshift to create unified data lakes for AI/ML readiness.
Wrote modular, test-driven Python code for backend microservices supporting data ingestion and transformation APIs.
Collaborated with data scientists to prepare and validate AI/ML training datasets for demand forecasting and recommendation models.
Implemented streaming ingestion using Kafka and Spark Structured Streaming for near-real-time data synchronization.
Designed data models and schemas optimized for partitioned access patterns across PostgreSQL and Snowflake.
Automated ingestion workflows using Airflow DAGs, ensuring timely and consistent data pipeline execution (see the DAG sketch after this section).
Applied unit, integration, and data validation tests to maintain pipeline accuracy and fault tolerance.
Containerized Python and Spark applications using Docker and deployed via Kubernetes for production scalability.
Improved data pipeline throughput by 35% by fine-tuning Spark configurations and optimizing partition sizes.
Integrated Redis caching and optimized Spark jobs for high-performance querying and precomputation.
Implemented data lineage tracking and governance across ETL and ingestion workflows.
Created monitoring dashboards in Grafana and CloudWatch to visualize pipeline performance and system health.
Enhanced CI/CD workflows with Jenkins for automated Spark job builds and production rollouts.
Collaborated with ML teams to operationalize predictive models using Spark inference pipelines.
Developed backend REST APIs for downstream analytics and dashboard integrations.
Conducted load and regression testing of data pipelines to ensure production reliability under scale.
Delivered comprehensive documentation of Spark jobs, schema logic, and orchestration workflows.
Environment: Python, PySpark, AWS (Glue, Lambda, S3, Redshift, ECS), Kafka, Airflow, Snowflake, PostgreSQL, Terraform, Docker, Jenkins, Redis, Grafana, CloudWatch, Kubernetes, FastAPI, Agile/Scrum
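A minimal sketch of the kind of Airflow DAG used to schedule these ingestion workflows; the DAG id, task names, and callables are hypothetical, and it assumes Airflow 2.x.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_pos_feed():
        # Placeholder for pulling the daily POS extract
        pass

    def load_to_warehouse():
        # Placeholder for the transform-and-load step
        pass

    with DAG(
        dag_id="pos_daily_ingest",          # hypothetical DAG name
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_pos_feed)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
        extract >> load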
Client: JPMorgan Chase, NYC, NY Jan 2019 - Sep 2019
Role: Python Developer
Responsibilities:
Gathered requirements, performed system analysis, design, development, testing, and deployment.
Developed tools using Python, Shell scripting, and XML to automate manual data and system management tasks.
Designed and maintained databases using Python and developed a RESTful API service using Flask, SQLAlchemy, MongoDB, Redis, and PostgreSQL.
Integrated Step Functions with CI/CD pipelines using AWS CodePipeline and CodeBuild to automate deployments and updates of state machines.
Applied machine learning algorithms (Scikit-learn, TensorFlow) to extract insights from market and operational data.
Developed Microservices by creating REST APIs to access data from multiple market sources and gather network traffic data.
Built a monitoring application that captured error data and stored it in centralized databases.
Utilized FastAPI to build secure and efficient RESTful APIs for capital market data retrieval and reporting, aligning with industry standards for fixed-income securities.
Deployed FastAPI applications with Docker and Kubernetes for high availability.
Integrated AWS Step Functions with Lambda, S3, DynamoDB, SNS, SQS, and API Gateway for full-stack data processing pipelines.
Developed and maintained Python-based tools and APIs for Capital Market data retrieval, transformation, and risk reporting.
Built RESTful APIs using Flask and FastAPI for fixed income and FX trade processing aligned with industry compliance standards.
Designed microservices for market data ingestion and trade lifecycle automation across multiple asset classes.
Developed valuation models to calculate P&L, PV, and risk measures (VaR, Delta, DV01) for Interest Rate Swaps, Bonds, and FX products (see the valuation sketch after this section).
Integrated FastAPI-based trade reporting services with compliance frameworks to meet MiFID II and Volcker Rule requirements.
Automated reconciliation of positions between trading, risk, and finance systems to ensure accuracy in daily P&L and exposure reports.
Developed pipelines to aggregate and cleanse trade data for regulatory reporting (Volcker, MiFID II) and capital adequacy assessments.
Partnered with front-office developers and quants to validate financial models and ensure accurate pricing and market data integration.

Environment: Python, Spring Boot, Kafka, JSON, GitHub, Linux, Django, Flask, FastAPI, Jenkins, Unix, HTML, CSS, RESTful Web Services, JavaScript, PyCharm, Spyder, Serverless Framework, MongoDB, PostgreSQL, MySQL, AWS, Spark, Eclipse, CloudWatch, Git, Kubernetes, Docker
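A minimal sketch of the kind of FastAPI valuation endpoint described above, computing PV and an approximate DV01 for a flat-yield, annual-coupon bond; the route, field names, and pricing simplifications are illustrative assumptions rather than the project's actual models.

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Bond(BaseModel):
        face_value: float
        coupon_rate: float        # annual coupon rate, e.g. 0.05
        years_to_maturity: int
        yield_rate: float         # annual yield, e.g. 0.04

    def present_value(face, coupon_rate, years, y):
        # Discount annual coupons plus principal at a flat yield
        coupon = face * coupon_rate
        pv = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
        return pv + face / (1 + y) ** years

    @app.post("/bond/risk")
    def bond_risk(bond: Bond):
        pv = present_value(bond.face_value, bond.coupon_rate,
                           bond.years_to_maturity, bond.yield_rate)
        bumped = present_value(bond.face_value, bond.coupon_rate,
                               bond.years_to_maturity, bond.yield_rate + 0.0001)
        # DV01: price change for a one-basis-point upward shift in yield
        return {"pv": pv, "dv01": pv - bumped}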

Client: Orion Private Limited, India Mar 2015 - Dec 2018
Role: Python Developer
Responsibilities:
Gathered requirements and performed system analysis, design, development, testing, and deployment.
Developed user interface using CSS, HTML, JavaScript, and jQuery.
Created a database using MySQL and wrote several queries to extract/store data.
Worked on Ingestion Process of geospatial data using ETL techniques.
Designed and implemented a dedicated MySQL database server to drive the web apps and report on daily progress.
Developed views and templates with Python and Django's view controller and templating language to create a user-friendly website interface (see the Django sketch after this section).
Analyzed the SQL scripts and designed solutions to implement using PySpark.
Used Django framework for application development.
Created the entire application using Python, Django, MySQL, and Linux.
Enhanced existing automated solutions, such as the Inquiry Tool for automated Asset Department reporting, adding new features and fixing bugs.
Built all database mapping classes using Django models and Cassandra.
Embedded AJAX in UI to update small portions of the web page avoiding the need to reload the entire page.
Improved performance through a more modular approach and greater use of built-in methods.
Environment: Python, C++, Django, Puppet, Jenkins, Pandas, Grafana/Graphite, GCP, MySQL, NoSQL, Linux, HTML, CSS, jQuery, JavaScript, Apache, Git, Eclipse, Pytest, Oracle, Cassandra
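A minimal sketch of the Django model-and-view pattern used for the reporting pages above; the model fields, filter, and template path are hypothetical.

    # models.py -- illustrative model for asset reporting data
    from django.db import models

    class Asset(models.Model):
        name = models.CharField(max_length=100)
        category = models.CharField(max_length=50)
        updated_at = models.DateTimeField(auto_now=True)

    # views.py -- render a template from a filtered queryset
    from django.shortcuts import render

    def asset_list(request):
        assets = Asset.objects.filter(category="hardware").order_by("-updated_at")
        return render(request, "assets/list.html", {"assets": assets})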