
Kiranmai - Senior Informatica Developer
[email protected]
Location: Wixom, Michigan, USA
Relocation:
Visa: OPT
Resume file: kiranmai-ETL-Informatica-Developer_1767104598008.docx
PROFESSIONAL SUMMARY:
8+ years of IT experience in design, analysis, development, documentation, and implementation, including databases, data warehousing, ETL design, Oracle, PL/SQL, SQL Server databases, Informatica PowerCenter 9.x/8.x/7.x, and Informatica Intelligent Data Management Cloud (IDMC) / Informatica Intelligent Cloud Services (IICS).
7 years of strong experience in data integration, data warehousing, and application integration.
4+ years of experience using Informatica Cloud (IICS) services such as Cloud Application Integration, Cloud Data Integration, Mass Ingestion, API Manager, API Portal, Administrator, Monitor, Application Integration Console, Data Synchronization, and Data Replication.
Experience in integrating Informatica Cloud with systems such as Snowflake, GCP, Veeva CRM, Salesforce, and third-party applications.
Knowledge of data governance, data quality, and master data management (MDM) tools.
Experience with Microsoft Business Intelligence tools, including SSRS and Power BI.
Experience in various RDBMS platforms such as Oracle (SQL, PL/SQL) and Teradata, including DDL creation, views, complex SQL constructs, stored procedures, indexing, and partitioning techniques.
Skilled in using AWS SNS, Lambda, and Apigee to build real-time, event-driven integrations and manage secure, scalable APIs.
Experience in ingesting the data from AWS S3 files into Snowflake Cloud data warehouse.
Skilled in Unix Shell Scripting and have experience on different UNIX platforms.
Strong experience in developing test plans and test scripts for data validation, with involvement in system and integration testing.
Good working experience in Agile and Waterfall methodologies.
Proficient in designing and implementing complex ETL mappings utilizing a variety of reusable and non-reusable transformations and user-defined functions.
Proficient in acquiring and integrating data from diverse sources, including Snowflake, Oracle, SQL Server, Excel, Access, and Adobe PDF, for use in Tableau data visualization projects.
Experience in creating detailed technical documents and diagrams using tools like Visio and Erwin, with strong proficiency in Microsoft Office Suite.
Specialized in data extraction, transformation, and loading (ETL) from a variety of databases including Teradata, Oracle, SQL Server, XML, Flat Files, COBOL and VSAM Files.
Experience in using Informatica Data Quality (IDQ) tools for Data Analysis / Data Profiling / IDQ Developer client, applying rules and developing mappings to move data from source to target systems.
Experienced in IBM MDM and PIM systems, Java, SQL, and J2EE, with skills in data modelling, SOA integration, and delivering scalable, secure solutions to improve data quality and business processes.
Used ServiceNow to align data flows with client needs, ensuring accurate data, consistent operations, and smooth system integration.
Actively participated in architecture, design, and code reviews, ensuring adherence to best practices and secure coding principles.
Improved ETL performance by using advanced tuning methods and splitting sessions for better efficiency.
Experienced Informatica Developer skilled in building ETL solutions, writing dynamic SQL, and integrating data using Informatica Cloud for public sector projects.
Experienced data engineer with expertise in IICS, cloud-based data integration, and optimizing data flows into data warehouses using tools like Google Cloud, BigQuery, and PostgreSQL.
Strong client-facing capabilities, adept at onsite-offshore co-ordination, with excellent communication and analytical skills.
Proficient in creating and scheduling T-SQL jobs to ensure daily execution and maintain operational efficiency.
Experienced in mapping techniques for Type 1, Type 2, and Type 3 Slowly Changing Dimensions.
Extensive experience in SQL and PL/SQL, including the design and implementation of tables, database links, materialized views, synonyms, sequences, stored procedures, functions, packages, triggers, joins, unions, cursors, collections, and indexes in Oracle.
Proficient in Linux/UNIX environments with strong knowledge of shell scripting; experienced in using command-line utilities such as pmcmd to execute workflows in non-Windows settings.
Experienced in using Git and GitHub for version control.
Skilled in automating CI/CD pipelines with Terraform and uDeploy, and proficient in shell scripting, Python, SQL, and Looker.
Experienced ETL Developer with expertise in PySpark and AWS, building efficient data pipelines, ensuring data accuracy, and optimizing performance for seamless analytics.
Skilled in implementing CI/CD pipelines with Jenkins to automate the deployment, testing, and monitoring of ETL jobs.
Proficient in data governance, data quality management, and Master Data Management (MDM) tools.
Designed and implemented ETL processes using Informatica IICS, PowerCenter, and Talend to integrate data from Oracle, SQL Server, and GCP BigQuery.
Experienced in creating and improving Data integration workflows using Databricks and Informatica IICS.
Hands-on experience designing and developing scripts in Python for data processing and automation.
Skilled in analyzing performance metrics to find and fix bottlenecks, using tools like session logs to improve job performance and efficiency.
Hands-on experience with Python scripting and libraries such as Pandas, Streamlit, and NumPy for data manipulation and automation (a minimal sketch follows this summary).
Expert in designing and optimizing Tableau dashboards using complex calculations and statistical functions, ensuring scalability and performance.
Skilled at resolving support tickets, improving Tableau configurations, and managing large datasets for useful insights.
Active participant in Agile sprint teams, utilizing JIRA software for task tracking and collaboration.
Skilled in configuring and scheduling ETL jobs using Autosys and Control M to ensure the timely execution of data workflows.
Experienced in configuring ETL Informatica jobs in Tidal Scheduler with defined frequencies and conditions.
Proficient in software development methodologies, including Agile Scrum.
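Purely as an illustration of the Pandas-based data preparation and validation work mentioned above (see the note next to the Pandas bullet), the following minimal sketch shows the general pattern; the file name, column names, and validation thresholds are hypothetical placeholders, not taken from any specific engagement.

```python
import pandas as pd

# Hypothetical staging extract; in practice this would come from an ETL source system.
df = pd.read_csv("staging_extract.csv", parse_dates=["txn_date"])

# Basic cleansing: drop exact duplicates and rows missing the business key.
df = df.drop_duplicates().dropna(subset=["account_id"])

# Simple validation rules: date-range and numeric sanity checks.
valid = df["txn_date"].between("2016-01-01", pd.Timestamp.today()) & (df["amount"] >= 0)
rejects = df[~valid]
clean = df[valid]

print(f"Loaded {len(clean)} rows, rejected {len(rejects)} rows")
```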
TECHNICAL SKILLS:
ETL: IICS (Cloud), Informatica PowerCenter, ICM, Informatica BDM, Product 360, Talend 7.3, Informatica Data Quality (IDQ) 10.2 HF1, MDM
Big Data Ecosystem: HDFS, Python, PySpark, Hive, Scala
Cloud Services: AWS (Lambda, SNS, S3), Snowflake, Salesforce CRM, Apigee API Gateway, Microsoft Azure Data Factory
Versioning Tools: Git
Databases: Oracle (SQL, PL/SQL), Microsoft SQL Server, DB2 (TOAD), Hive, Azure DB, Snowflake, MySQL, Veeva CRM
Programming Languages: SQL, PL/SQL, Oracle, Java (Servlets, JSP), Angular, Spring Boot, Python
Operating Systems: Linux, Unix, Windows
Project Management: HP ALM, JIRA, ServiceNow, Azure DevOps
Data Visualization: Power BI, Tableau
Scheduling Tools: Control-M, AutoSys, Tidal
Other Tools: Eclipse, IntelliJ

EDUCATION DETAILS:
Bachelor's degree in Computer Science from JNTUK, 2016.
Master's degree from the University of Michigan-Dearborn, Aug 2022 - Apr 2024.
PROFESSIONAL EXPERIENCE:
Client: Bank of America Jan 2024 - Present
Role: Sr. Informatica IICS /IDMC Developer
Responsibilities:
Worked on a data migration project for a banking client; the primary responsibility was to merge reporting processes from the bank's multiple lines of business into one unified system, improving the bank's ability to respond to and support Anti-Money Laundering (AML) requirements.
Designed and developed data models, including conceptual, logical, and physical models, to align with business objectives and system requirements.
Collected and integrated data from various database systems by creating PL/SQL scripts to extract, transform, and load (ETL) data for business applications.
Developed and executed SQL queries to transform and cleanse the data, aligning it with business logic and reporting needs.
Developed Dynamic SQL queries to handle complex data transformations, enabling flexibility and scalability in data processing tasks.
Designed and implemented a scalable data warehouse solution using Google Cloud data engineering tools (BigQuery, Cloud Storage, Dataflow, Pub/Sub) to streamline data processing, storage, and analytics.
Designed and developed real-time integrations using Informatica CAI to facilitate data exchange between public sector systems.
Validated all developed artifacts against test plans, performing comprehensive functional and performance testing to ensure the high quality and accuracy of the deliverables before deployment.
Developed ETL batch and real-time integration data pipelines using IICS CAI, CDI Services, and Informatica PowerCenter (10.5).
Designed and built reusable ETL components in Informatica IICS to integrate data from multiple sources into the target SaaS solution using real-time and batch pipelines.
Worked closely with cross-functional teams, including SaaS vendors, business users, and IT teams, to align integration efforts with organizational objectives.
Automated deployment and infrastructure setup for ETL processes using Terraform and Udeploy, making deployments faster and more efficient.
Created IICS components including mappings, mapplets, mapping tasks, task flows, business services, data replication, data synchronization tasks, file listeners, and hierarchical schemas.
Performed extensive migrations of mappings, worklets, and workflows across different folders and repositories.
Used Informatica IICS/IDMC to automate commission tracking, reducing manual effort and ensuring accurate payouts for different compensation plans.
Designed IICS CAI service connectors, custom connections, application connections, process objects, and processes.
Converted complex SQL queries into PySpark DataFrame API and RDD operations, improving data processing speed and efficiency (see the sketch at the end of this section).
Created batch and real-time ETL pipelines using IICS CDI, CAI, and PySpark to process data formats such as JSON and Avro.
Developed and maintained ETL mappings, workflows, and tasks in IICS to integrate, cleanse, and transform data from diverse sources.
Designed and implemented complex data pipelines for seamless cloud integration and data transformation using IICS.
Developed efficient data pipelines in Databricks and integrated them with Informatica IICS to process and transform data from Oracle, SQL Server, AWS S3, and Snowflake.
Managed data lakes in Azure and AWS with Databricks, ensuring efficient storage, processing, and data accuracy using error-handling mechanisms.
Developed PySpark scripts for efficient data extraction, transformation, and loading (ETL), ensuring high performance and scalability.
Used AWS Glue to manage ETL workflows and integrated PySpark with AWS S3, Athena, and Redshift for seamless data processing.
Managed the migration of ETL processes from Informatica PowerCenter to Informatica Cloud (IDMC), ensuring a smooth transition with minimal impact on data workflows.
Created ETL workflows to automate regulatory reports, ensuring accurate and timely compliance with rules such as Anti-Money Laundering (AML) and fraud detection.
Monitored and validated client financial data using ETL processes to track transactions and identify unusual patterns.
Created and managed data integration tasks, including synchronization and replication, to streamline cloud-based data processing.
Updated existing PowerCenter mappings and workflows to work with IDMC, optimizing them for cloud-based integration and taking advantage of cloud features for better performance.
Built reusable components such as task flows, templates, and parameterized mappings to standardize processes and improve efficiency.
Developed a cloud-based integration solution using AWS SNS, Lambda, and Apigee, enabling real-time, event-driven data processing and seamless API management.
Managed and secured API traffic with Apigee API Gateway, optimizing performance and scalability while maintaining secure, monitored API interactions.
Utilized Swagger files, business services, web services, and REST v2 to integrate with third-party applications.
Implemented software across all environments using API and ETL tools (Informatica PowerCenter 10.2 HF1, Informatica IDQ 10.2 HF1, MuleSoft, SAP Data Services, Microsoft Azure Data Factory).
Extensive experience in using Git and GitHub for version control, ensuring proper code management and tracking changes in ETL projects.
Implemented CI/CD pipelines using Jenkins for automating ETL job deployment, testing, and monitoring.
Created ETL pipelines using Informatica PowerCenter and IICS to prepare, integrate, and process data for analytics, reporting, and machine learning.
Managed branch workflows, pull requests, and code reviews in GitHub for collaboration with team members.
Created mappings based on business requirements to implement incremental fixes and support data integration tasks.
Used Informatica Data Explorer (IDE) and Informatica Data Quality (IDQ) to profile different data sources, create scorecards, set up and validate rules, and provide data for business analysts to create the rules.
Created Informatica workflows and IDQ mappings for Batch and Real Time.
Created and managed ETL pipelines using Informatica IICS, PowerCenter, and DataStage to collect and process data from Oracle, SQL Server, AWS S3, Snowflake, and BigQuery.
Extracted data from various heterogeneous sources such as DB2, Salesforce, Mainframes, Teradata, and flat files using Informatica PowerCenter and loaded it into the target database.
Created and Configured Landing Tables, Staging Tables, Base Objects, Foreign key relationships, Queries, Query Groups in MDM.
Created ETL workflows to monitor client activities, ensuring compliance with regulations like Anti-Money Laundering (AML).
Created IICS data replication tasks to replicate Veeva CRM and Salesforce objects into SQL Server tables.
Utilized Pandas to enhance data preparation and validation processes as part of ETL workflows.
Cleansed data by addressing missing values, eliminating duplicates, and ensuring consistent formatting to meet the required standards for integration.
Validated data consistency by implementing checks for date ranges, numeric values, and referential integrity to ensure high data quality and compliance with business rules.
Built mappings according to business requirements for incremental fixes using the Mapping Designer.
Monitored job failures, identified root causes, and fixed issues to ensure smooth data processing in Informatica IICS, PowerCenter, and Teradata.
Set up and managed Informatica jobs using JAMS scheduler, automating data workflows to improve scheduling efficiency and reduce the need for manual work.
Created and scheduled Apache Airflow (Astronomer) workflows to manage ETL pipelines and automate job tasks.
Configured and scheduled ETL jobs using Autosys and Control M to ensure timely execution of data workflows.
Monitored job schedules, ensuring data pipelines ran smoothly and promptly within SLA requirements.
Created Python scripts to automate data tasks, improve data quality checks, and support ETL processes, making the workflow more efficient.
Responsible for Unit and System Testing to ensure data integrity and performance.
Conducted performance tuning in both IICS and SQL queries.
Participated in deployment activities, including change creation and tag creation in GitHub.
Worked with UAT testers early in the project to check ETL processes, find issues, and make sure data was accurate before going live.
Monitored and Supported ETL processes 24/7, ensuring Informatica workflows ran without issues.
Identified, analysed, and resolved job failures in Informatica workflows, leveraging debugging skills to fix mapping and transformation component issues.
Successfully upgraded PowerCenter Workflow Manager and its related components during a PowerCenter version update, then tested and validated all workflows to ensure smooth operation with minimal downtime.
Used UNIX command-line operations to investigate and resolve technical issues within ETL processes.
Performed proactive health checks to identify bottlenecks and ensure data accuracy and consistency in workflows.
Tracked Databricks jobs with cluster logs and set up alerts to fix issues quickly.
Fixed issues in Tableau reports and resolved user support tickets by identifying and correcting workbook errors, ensuring they worked smoothly.
Collaborated with different teams to create and implement data integration solutions, ensuring smooth data transfer from various systems like ERP, CRM, and cloud platforms.
Independently designed ETL solutions, communicated effectively with teams, and quickly learned new technologies like IICS CAI, AWS SNS, Lambda, and Apigee to build innovative integrations.
Checked logs and error messages when jobs failed to find out the causes, such as connection problems with SQL Server, Oracle, or AWS.
Used Informatica Data Quality (IDQ) to keep data quality high by setting up automated checks and alerts for any data issues.
Environment: IICS (CDI, CAI), PowerCenter 10.5, Informatica IDQ, Oracle, MDM, IDE, Linux, Jira, GitHub, Cron, Microsoft Visual Studio, WinSCP, AWS (Lambda, SNS, S3), Snowflake, Apigee API Gateway, Microsoft Azure Data Factory.
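As a hedged illustration of the SQL-to-PySpark conversion work noted in this section, the sketch below shows how a simple aggregation query could be expressed with the DataFrame API; the table name, column names, and S3 paths are hypothetical placeholders rather than project artifacts.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql_to_dataframe_sketch").getOrCreate()

# Rough equivalent of:
#   SELECT account_id, SUM(amount) AS total_amount, COUNT(*) AS txn_count
#   FROM transactions WHERE txn_date >= '2024-01-01' GROUP BY account_id
txns = spark.read.parquet("s3://example-bucket/transactions/")  # hypothetical path

summary = (
    txns.filter(F.col("txn_date") >= "2024-01-01")
        .groupBy("account_id")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
)

summary.write.mode("overwrite").parquet("s3://example-bucket/transaction_summary/")
```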
Client: CVS Health May 2023 - Dec 2023
Role: Informatica IICS Developer
Responsibilities:
Gathered business requirements and participated in technical review meetings to gain a thorough understanding of the data warehouse model.
Created ETL jobs using Informatica Intelligent Cloud Services (IICS) and Informatica PowerCenter (10.4).
Designed reusable mappings in IICS for enhanced efficiency.
Utilized IICS services such as Data Integration, Data Synchronization, administration, and monitoring.
Constructed IICS mappings and mapping tasks to extract data from various sources, including Salesforce, Oracle, flat files, and SQL Server.
Extensively used Informatica Data Explorer (IDE) and Informatica Data Quality (IDQ) profiling capabilities to profile various sources, generate scorecards, create and validate rules, and provide data for business analysts to define rules.
Integrated Healthcare data from multiple systems, such as Electronic Health Records (EHR), claims data, patient management systems, and prescription data.
Ensured compliance with data governance policies, data security standards, and privacy regulations, such as HIPAA, across all data integration activities.
Cleansed and validated healthcare data from various sources, ensuring the integrity and accuracy of patient demographics, diagnosis codes, medication details, and treatment plans.
Followed best practices for data governance and security to ensure patient data was managed and processed according to HIPAA rules.
Used Streamlit to develop an interactive dashboard for stakeholders to explore patient data, claims processing, and clinical trial results after ETL transformation (a sketch appears at the end of this section).
Displayed patient demographics, diagnosis trends, and treatment patterns by building ETL pipelines in Informatica to integrate and transform data from various healthcare systems into a centralized data warehouse.
Automated real-time processing with AWS Lambda, reducing operational costs by deploying scalable, serverless functions for efficient data handling.
Optimized and secured API traffic using Apigee API Gateway, enhancing performance, scalability, and security while ensuring seamless, monitored interactions.
Implemented pushdown optimization techniques to improve performance.
Used Shell Scripting to automate ETL workflows, making data extraction, transformation, and loading faster and reducing manual work.
Built and managed data pipelines for GCP PostgreSQL to ensure easy data loading, processing, and retrieval for reports and analysis.
Set up CI/CD pipelines with Terraform and UDeploy to automate ETL deployments and minimize errors.
Created ETL workflows using the HL7 Library on Linux to handle real-time and batch healthcare data processing while following regulatory standards.
Responsible for troubleshooting ETL job failures and ensuring smooth execution.
Created Informatica workflows with PowerCenter, BDM, and IICS to transfer data between Oracle databases, Salesforce, Cloudera Data Lake, and other systems.
Used IDQ and ICRT to ensure high data quality and manage real-time updates.
Applied data governance best practices with tools like Axon to keep data organized and reliable.
Troubleshot job failures, analysed logs, identified root causes, and implemented corrective actions.
Worked closely with system administrators to integrate ETL job scheduling with enterprise scheduling systems.
Managed ETL control tables to handle batch processes for incremental and CDC data.
Engaged in data modelling, including the creation of database tables and views.
Analyzed logs to identify and fix slow processes, using tuning techniques to improve performance.
Utilized various components such as assignment, service, subprocess, and jump in IICS workflows.
Implemented event-based subscriptions for IICS jobs to enhance automation.
Managed the migration of ETL processes from Informatica PowerCenter to Informatica Cloud (IDMC), ensuring seamless transition and minimal disruption to data workflows.
Re-engineered existing PowerCenter mappings and workflows for compatibility with IDMC, optimizing them for cloud-based data integration and leveraging advanced cloud features for enhanced performance.
Created and managed job dependencies between different ETL processes using Autosys and Control M.
Configured ETL Informatica jobs in Tidal Scheduler with defined frequencies and conditions.
Applied Agile Scrum software development practices throughout the project.
Established IICS connections to transfer data from SFTP servers to local networks.

Environment: IICS CDI, Informatica PowerCenter 10.4, Oracle, MS SQL Server, Salesforce, UNIX, Jira, IDQ, MDM, Toad, Informatica IDQ 10.2 HF1, AWS (Lambda, SNS, S3), Snowflake, Apigee API Gateway, Microsoft Azure Data Factory.
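A minimal sketch of the kind of Streamlit dashboard referenced in this section; the extract file, columns, and metrics are hypothetical stand-ins for the curated tables produced by the ETL layer.

```python
import pandas as pd
import streamlit as st

st.title("Claims Overview")  # hypothetical post-ETL exploration dashboard

# In practice this would read from the warehouse; a flat extract keeps the sketch simple.
claims = pd.read_csv("claims_extract.csv", parse_dates=["service_date"])

# Simple filter for stakeholders.
plan = st.selectbox("Plan type", sorted(claims["plan_type"].unique()))
subset = claims[claims["plan_type"] == plan]

st.metric("Total claims", len(subset))
st.metric("Total billed", f"${subset['billed_amount'].sum():,.2f}")

# Monthly claim counts as a quick trend view.
monthly = subset.groupby(subset["service_date"].dt.strftime("%Y-%m")).size()
st.bar_chart(monthly)
```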
Client: Pier Soft Technologies, Hyderabad, India Oct 2020 - Jul 2022
Role: Sr. Informatica Developer
Responsibilities:
Involved in gathering and analysing business requirements, writing requirement specification documents, and identifying data sources and targets.
Wrote and managed UNIX commands for system operations and scripting tasks.
Conducted data cleansing and standardization using cleanse functions in Informatica IICS.
Designed and developed ETL workflows to extract, transform, and load financial data from multiple sources, including databases, flat files, and APIs, into data warehouses and reporting systems.
Built data pipelines to integrate financial data from various systems, such as banking applications, financial software, and third-party APIs.
Worked on data integration to bring together data from multiple finance systems, like banking, accounting, and payment systems, into one central repository.
Developed complex transformations for financial data to ensure it met business needs for accurate reporting and analysis.
Implemented data quality checks to handle missing values and duplicates.
Integrated ETL processes with financial applications like Oracle Financials, SAP, and other ERP or accounting systems to extract and load transactional data.
Took part in data migration projects to transfer data from legacy financial systems to new platforms.
Helped create data feeds for business tools like Power BI and Tableau, used for financial reports.
Created complex mappings in Informatica PowerCenter, utilizing various transformations, mapping parameters, mapping variables, mapplets, and parameter files in Mapping Designer.
Imported standardized mappings into Informatica Designer as mapplets.
Used Informatica IDQ to profile data and identify duplicate records for removal.
Used SQL to reduce the amount of data transferred across the network whenever possible.
Expert in data extraction, transformation, and loading (ETL) from sources including Teradata, Oracle, SQL Server, XML, flat files, COBOL, and VSAM files.
Utilized advanced SQL queries and PL/SQL programming to design and maintain database packages, stored procedures, functions, and triggers for efficient data processing and automation.
Designed and scheduled ETL/ELT processes for periodic automated updates, including FTP/SFTP and SCP file transfers.
Supported and validated quality by creating detailed test cases for unit and integration testing at all stages.
Created, deployed, and scheduled jobs in Tidal Scheduler for integration, user acceptance testing, and production environments.
Supported cross-functional development activities with integration technologies such as Microsoft .NET/C#, SQL Server, Visual Studio, and DevOps.
Designed workflows with multiple sessions, incorporating decision, assignment, event wait, and event raise tasks, and utilized Informatica Scheduler to schedule jobs.
Configured and scheduled ETL jobs using Autosys and Control M to ensure timely execution of data workflows.
Optimized job schedules to minimize system downtime and improve processing times.
Implemented notification systems for failed or delayed jobs, ensuring rapid responses to issues.
Performed regular checks and audits to ensure proper job executions and updates to the scheduling system.
Utilized Teradata FastLoad utilities to efficiently load data into tables.
Utilized SQL tools like TOAD to execute queries and validate data integrity.
Developed UNIX shell scripts for Informatica pre-session and post-session commands and for Autosys scheduling of workflows (see the sketch at the end of this section).
Conducted tuning of queries, targets, sources, mappings, and sessions to optimize performance.
Used Linux scripts and detailed test plans to ensure data loading processes ran successfully.
Worked with the Quality Assurance team to develop test cases for unit, integration, functional, and performance testing.
Shared knowledge with end users and documented the design, development, implementation, daily loads, and mapping process flow in detail.
Environment: Informatica PowerCenter 10.1, UNIX, SQL, Shell, PL/SQL, Netezza, Teradata, Collibra, Microsoft SQL Server.
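The pre-session/post-session automation described above was shell based; purely for illustration, the pmcmd workflow launch at the core of such scripts is sketched here in Python, with the integration service, domain, folder, workflow name, and environment variables as hypothetical placeholders.

```python
import os
import subprocess

# Hypothetical names; real values would come from the scheduler environment.
cmd = [
    "pmcmd", "startworkflow",
    "-sv", "INT_SVC_DEV",           # integration service
    "-d", "Domain_Dev",             # Informatica domain
    "-u", os.environ["INFA_USER"],  # credentials injected by the scheduler
    "-p", os.environ["INFA_PASS"],
    "-f", "FIN_DW",                 # repository folder
    "-wait",                        # block until the workflow completes
    "wf_load_finance_daily",
]

result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    # Non-zero exit lets Autosys/Control-M mark the job as failed.
    raise SystemExit(f"pmcmd failed:\n{result.stdout}\n{result.stderr}")
```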
Client: L Cube Innovative Solutions, Chennai, India Aug 2018 - Sep 2020
Role: ETL/Informatica Developer
Responsibilities:
Engaged in gathering business requirements and participated in technical review meetings to gain a thorough understanding of the data warehouse model.
Designed the dimensional model and data loading process utilizing Slowly Changing Dimension (SCD) Type 2 for quarterly membership reporting (see the sketch at the end of this section).
Developed ETL jobs using Informatica PowerCenter (9.5) and Informatica PowerExchange.
Designed and developed ETL workflows to extract, transform, and load data from diverse sources such as product catalogs, customer databases, and transaction logs into the data warehouse.
Worked on data map creation in Informatica PowerExchange for sources such as VSAM files, IMS databases, and DB2 fixed-width files.
Designed and implemented ETL workflows to aggregate product data from multiple vendor systems into a unified database, ensuring accuracy, consistency, and real-time updates for inventory and pricing information.
Developed mappings in Informatica to load the data from various sources into the Data Warehouse, using different transformations like Source Qualifier, Look up, Aggregator, Stored Procedure, Update Strategy, Joiner, Filter.
Used Informatica PowerCenter/IICS to handle high-volume data pipelines for efficient data integration and processing.
Designed and executed data cleansing and deduplication processes to maintain the accuracy, consistency, and integrity of customer data.
Implemented email tasks for success and failure notifications to streamline communication.
Utilized decision tasks to manage and execute different workflows within the same process.
Provided support to team members with various Informatica-related inquiries and tasks.
Supported ETL processes by identifying and fixing issues in data pipelines to reduce downtime and smooth operations.
Monitored scheduled jobs and analyzed performance to maintain system efficiency, especially during busy periods.
Used push-down optimization and session partitioning to speed up complex data transformations.
Implemented IICS jobs to run as per event-based subscriptions.
Worked closely with the Scrum Master to address bottlenecks in the ETL pipeline, ensuring timely delivery of data solutions within the sprint cycle.
Used JIRA dashboards to report ETL job status, issues, and progress to both internal teams and external stakeholders, providing transparency and clear communication.
Implemented IICS connections to pull data from SFTP servers and download it to the local network.
Environment: Informatica PowerCenter 9.0.1, Veeva CRM, Jira, Unix, UC4, Teradata, Amazon S3.
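The SCD Type 2 loads above were built with PowerCenter transformations (Lookup, Expression, Update Strategy); purely to illustrate the expire-and-insert logic, here is a rough pandas sketch with hypothetical member columns and file names.

```python
import pandas as pd

HIGH_DATE = pd.Timestamp("2099-12-31")   # sentinel end date for current rows
today = pd.Timestamp.today().normalize()

# Current dimension rows and the incoming quarterly extract (hypothetical columns).
dim = pd.read_csv("dim_member.csv", parse_dates=["eff_start", "eff_end"])
incoming = pd.read_csv("member_extract.csv")

current = dim[dim["eff_end"] == HIGH_DATE]
merged = incoming.merge(current, on="member_id", how="left", suffixes=("", "_cur"))

# A change in plan_code triggers a new version; unseen keys are brand-new members.
changed = merged[merged["plan_code_cur"].notna() & merged["plan_code"].ne(merged["plan_code_cur"])]
new_keys = merged[merged["plan_code_cur"].isna()]

# Expire the superseded versions, then append the new current rows.
expire_mask = dim["member_id"].isin(changed["member_id"]) & (dim["eff_end"] == HIGH_DATE)
dim.loc[expire_mask, "eff_end"] = today

inserts = pd.concat([changed, new_keys])[["member_id", "plan_code"]].assign(
    eff_start=today, eff_end=HIGH_DATE
)
dim = pd.concat([dim, inserts], ignore_index=True)
```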

Client: Niche Bees Techno Solutions, Hyderabad, India Jun 2016 - Jul 2018
Role: Informatica Developer
Responsibilities:
Created and managed ETL workflows to transfer data between the ERP system and other applications, such as CRM, financial, and HR systems.
Created mappings to extract, cleanse, and load data from multiple sources like Oracle, SQL Server, and flat files into ERP databases.
Developed and implemented ETL workflows using Informatica PowerCenter.
Created mappings, workflows, mapping specifications, rule specification, mapplets, rules, reference data.
Designed and built Dimensional and Physical Data Model with a clear understanding of best practices.
Created dimensions and facts in physical data model using ERWIN tool.
Extracted the data from Teradata, SQL Server, Oracle, Files, and Access into Data warehouse.
Designed Fact and Dimension tables for a star schema architecture to facilitate the development of the data warehouse.
Optimized ETL workflows to improve performance and speed up the processing of large ERP datasets.
Experience with Informatica PowerCenter and SAP BODS, coupled with proficiency in PL/SQL.
Designed, developed, tested, and implemented data pipelines using Informatica PowerCenter, SAP BODS, and SQL Server technologies.
Assisted in monitoring ETL jobs to ensure they were completed on time and helped identify and resolve any errors or issues during the data loading process.
Helped in handling sensitive ERP data, such as financial transactions and employee records, following secure data handling practices to ensure data privacy and compliance.
Documented ETL designs, workflows, and operational procedures for future reference.
Utilized Informatica Designer to create complex mappings employing various transformations for efficient data movement to the data warehouse.
Designed and implemented intricate source-to-target mappings using a range of transformations, including Aggregator, Lookup, Joiner, Source Qualifier, Expression, Sequence Generator, and Router (a rough sketch of the lookup logic follows this section).
Mapped client processes, databases, and reporting software to HPE's XIX X12 processing systems, leveraging technologies such as BizTalk, Visual Studio, Oracle SQL, MS SQL, C#, .NET, WSDL, SOAP, REST, API, XML, and XSLT.
Environment: Informatica PowerCenter 8.6, UNIX, Oracle
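To illustrate the star-schema load pattern described above, the sketch below expresses the surrogate-key lookup step (what the Lookup transformation does in the mapping) as a pandas join; the staging file, dimension tables, and column names are hypothetical.

```python
import pandas as pd

# Hypothetical staging and dimension extracts.
stg_sales = pd.read_csv("stg_sales.csv")         # natural keys: customer_code, product_code
dim_customer = pd.read_csv("dim_customer.csv")   # customer_key, customer_code, ...
dim_product = pd.read_csv("dim_product.csv")     # product_key, product_code, ...

# Resolve surrogate keys, as a connected Lookup transformation would.
fact = (
    stg_sales
    .merge(dim_customer[["customer_key", "customer_code"]], on="customer_code", how="left")
    .merge(dim_product[["product_key", "product_code"]], on="product_code", how="left")
)

# Rows failing the lookup would be routed to a reject target in the real mapping.
rejects = fact[fact["customer_key"].isna() | fact["product_key"].isna()]
fact_sales = fact.dropna(subset=["customer_key", "product_key"])[
    ["customer_key", "product_key", "sale_date", "quantity", "amount"]
]
```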
