Job Description

Data Flow Expert
Job Number: 21-15650
Grab the opportunity to achieve your full potential! Eclaro is looking for a Data Flow Expert for our client in Charlotte, NC.
Eclaro’s client is one of the world's largest financial institutions, committed to providing the tools and services that bridge the gap between customers and their goals. If you’re up to the challenge, then take a chance at this rewarding opportunity!
Position Overview:
  • Design and build a best-in-class Data Management and Integration Services capability over Infrastructure/ITSM data using Hadoop architecture.
  • Innovate and transform the systems integration landscape for the client’s organization, following industry best practices and advancing capability maturity in support of Enterprise Data Management standards.
  • Analyze the current RDBMS Master Data Management platform, including orchestrations, workflows, and transformations, and help design a scalable Hadoop-based platform for structured and semi-structured big data.
  • Reengineer traditional batch-based database systems and stored procedures as Big Data services, applying expert-level skills in Apache NiFi and Apache Kafka.
  • Develop and implement data processes using Spark Streaming, including Spark DataFrames, Spark SQL, and Spark MLlib.
  • Deploy Apache HBase and apply its capabilities to support OLTP applications.
Required Experience:
  • The ideal candidate has solid experience in Data Warehousing and Master Data Management design and development.
  • Strong understanding of data management concepts and applied DW/MDM experience developing DB-level routines and objects.
  • Experience migrating a traditional Relational Database Management System (RDBMS) to a Hadoop-based architecture.
  • Hands-on experience developing with many of the Apache Hadoop-based tools, including development and support of integrations with multiple systems and ensuring data accuracy and quality by implementing business and technical reconciliations.
  • Able to understand macro-level requirements and convert them into actionable tasks to deliver a technically sound product.
  • Able to work collaboratively in teams.
  • 10+ years of total IT experience
  • At least 5 years of experience developing for Data Warehousing, Data Marts, and/or Master Data Management
  • Deep experience with Hadoop, including NiFi, Kafka, Flink, Spark, HBase, Hive, and HDFS
  • Solid experience serving trained machine learning models using Spark Streaming and Apache Flink
  • Programming experience in Python, PySpark, and Spark SQL
  • Exposure to Relational Database Management Systems such as Oracle, DB2, or SQL Server
  • Demonstrated, deep knowledge of the Hadoop ecosystem
  • Object-oriented programming concepts
  • Expert SQL skills
  • Experience in SDLC and best practices for development
  • Ability to work against mid-level design documentation, take it to a low-level design, and deliver a solution that meets the success criteria
  • Knowledge of packaging and promotion practices for maintaining code in development, test, and production
  • Experience with Jira & Bitbucket
  • Experience as a Data Scientist

If interested, you may contact:
Merly Villanueva 

Equal Opportunity Employer: Eclaro values diversity and does not discriminate based on Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.

Application Instructions

Please click on the link below to apply for this position. A new window will open and direct you to apply at our corporate careers page. We look forward to hearing from you!

Apply Online