Job Duties:
· 8-10 years of experience in Data Engineering and Data Analysis.
· Hands-on experience with the Hadoop stack of technologies (Hadoop, PySpark, HBase, Hive, Pig, Sqoop, Scala, Flume, HDFS, MapReduce).
· Hands-on experience with Python and Kafka.
· Good understanding of Database concepts, Data Design, Data Modeling, and ETL.
· Hands-on experience analyzing, designing, and coding ETL programs, including Data pre-processing, Data Extraction, Data Ingestion, Data Quality, Data Normalization, and Data Loading.
· Working experience delivering projects using Agile methodology; hands-on experience with Jira.
· Good understanding of Data Components, Data Processing, and Data Analytics on AWS is good to have.
· Experience with data modeling tools such as Erwin is good to have.
· Preferred location: Cleveland or Pittsburgh.
· Master's or Bachelor's degree in Computer Science or an equivalent field.