Data Engineer
No. of positions: 05
Mandatory Requirements
Experience with AWS Glue
Experience with Apache Parquet
Proficient in AWS S3 and data lakes
Knowledge of Snowflake
Understanding of file-based ingestion best practices
Scripting languages: Python and PySpark
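As a rough illustration of the mandatory stack above, here is a minimal sketch of an AWS Glue PySpark job that reads raw CSV files from an S3 landing zone and writes partitioned Parquet into a data lake. The bucket names, paths, and partition column are hypothetical placeholders, not details of the actual role.

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical S3 locations -- replace with real buckets/prefixes.
raw_path = "s3://example-landing-zone/orders/"
lake_path = "s3://example-data-lake/orders/"

# Read raw CSV from the landing zone, treating the first row as a header.
df = glue_context.spark_session.read.option("header", "true").csv(raw_path)

# Write columnar Parquet, partitioned by a (hypothetical) date column --
# the usual file layout for an S3-based data lake.
df.write.mode("overwrite").partitionBy("order_date").parquet(lake_path)

job.commit()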
CORE RESPONSIBILITIES
Create and manage cloud resources in AWS
Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
Process and transform data using technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it in the language supported by the base data platform
Develop automated data quality checks to make sure the right data enters the platform and to verify the results of calculations (a minimal sketch follows after this list)
Develop infrastructure to collect, transform, combine, and publish/distribute customer data
Identify process-improvement opportunities to optimize data collection, insights, and displays
Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
Identify and interpret trends and patterns from complex data sets
Construct a framework using data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
Participate as a key contributor in regular Scrum ceremonies with the agile teams
Develop queries, write reports, and present findings
Mentor junior team members and bring in industry best practices
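By way of illustration only (not the team's actual framework), the data quality responsibility above might look like the following PySpark sketch: reject an incoming batch when key columns contain nulls or the row count falls below an expected minimum. The column names and threshold are hypothetical.

from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def run_quality_checks(df: DataFrame, key_columns: list, min_rows: int) -> list:
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    total = df.count()
    # Guard against empty or truncated loads.
    if total < min_rows:
        failures.append(f"row count {total} below expected minimum {min_rows}")
    # Key columns must be fully populated before data enters the platform.
    for col in key_columns:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls > 0:
            failures.append(f"column {col!r} has {nulls} null values")
    return failures

# Usage: abort ingestion when any check fails.
# failures = run_quality_checks(df, ["customer_id", "order_date"], min_rows=1)
# if failures:
#     raise ValueError("Data quality checks failed: " + "; ".join(failures))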
QUALIFICATIONS
5+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
Strong background in math, statistics, computer science, data science, or a related discipline
Advanced knowledge of at least one language: Java, Scala, Python, or C#
Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web
Services (AWS), Docker / Kubernetes, Snowflake
Proficient with:
Data mining/programming tools (e.g., SAS, SQL, R, Python)
Database technologies (e.g., PostgreSQL, Redshift, Snowflake, and Greenplum)
Data visualization tools (e.g., Tableau, Looker, MicroStrategy)
Comfortable learning about and deploying new technologies and tools.
Organizational skills and the ability to handle multiple projects and priorities simultaneously
and meet established deadlines.
Good written and oral communication skills and the ability to present results to non-technical audiences
Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity or experience with any of the following is a plus:
AWS certification
Spark Streaming
Kafka Streaming / Kafka Connect
ELK Stack
Cassandra / MongoDB
CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Work Location: Ahmedabad, Hyderabad, Remote (preferably from Pune & Delhi)
CTC: Data Engineer: Up to 30.0 LPA
Interested candidates, please call us on 9035007003 / 9845264304 / 9035302302 or email your resume to placements@beechi.in
Job Features
Job Category: IT