Big Data Software Engineer
Eclaro is looking for a Big Data Software Engineer for our client in Research Triangle Park, NC.
Eclaro’s client is a major technology firm with a prominent presence in large and fast-growing markets, providing products and services that enable businesses and economies to thrive. If you’re up to the challenge, then take a chance at this rewarding opportunity!
- Act as a subject matter expert, keeping the team up to date on big data standards, practices, and technology trends
- Stay current on big data trends and research various technologies as they become relevant.
- Develop solutions for extracting, preparing, and loading data from a variety of relational and non-relational sources into the Big Data environment.
- Build data and analytics tools that will offer deeper insight into the pipeline, allowing for critical discoveries surrounding key performance indicators
- Design, implement, test, deploy and support near real-time and batch data pipelines, using Hadoop infrastructure.
- Design scalable and maintainable solutions using Big Data tools such as Apache Spark and Apache Kafka
- Adopt and enforce best practices related to data ingestion and extraction of data from the big data platform.
- Gather project requirements by meeting with project stakeholders and various operational and business teams
- Tune the performance of Hadoop clusters and of Kafka, Spark, and Hadoop MapReduce jobs
- Work with the BI, operational systems, and EDW teams to develop a modern dimensional representation of our transactional data.
- Create the Snowflake database objects (tables, indexes, etc.) required to support the data model
- Work with the data warehouse, business intelligence and advanced analytics teams to evaluate their Big Data use cases and provide feedback and guidance
- Escalate support issues with internal teams and vendors
Required Skills:
- Strong scripting experience with Python/Scala
- Understanding of best practices for building an Enterprise Data Lake
- Previous experience with container technologies such as Docker or Kubernetes is desirable but not essential.
- Excellent written and verbal communication skills for technical writing and client presentations.
- At least 5 years of experience with the development process.
- At least 3 years (required) or 4 years (preferred) of experience designing and leading the development of production-grade big data platforms, including automated data acquisition, standardization, record matching, and processing using Hadoop, Spark, and Elasticsearch.
- At least 1 year (required) or 4 years (preferred) of hands-on experience developing applications using cloud-based platforms/microservices.
- Experience with one or more ETL tools and an understanding of best practices for building and designing ETL/ELT code
- Experience with database modeling tools such as Erwin
- Experience with Git
If hired, you will enjoy the following Eclaro Benefits:
- 401k Retirement Savings Plan administered by Merrill Lynch
- Commuter Check Pretax Commuter Benefits
- Eligibility to purchase Medical, Dental & Vision Insurance through Eclaro
Interested in applying?
Contact Jameese Halcon at Jameese.Halcon@eclaro.com now.
Equal Opportunity Employer: Eclaro values diversity and does not discriminate based on Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.