(3 - 5 Years) "BigTapp" Hiring: Big Data Engineer On Sept 2018 @ Chennai




Company:       BigTapp

Website:       www.bigtappanalytics.com

Eligibility:   BE / B.Tech / MCA / MSc

Experience:    3 - 5 years

Location:      Chennai

Job Role:      Big Data Engineer

JOB SUMMARY:

Company Profile:

BigTapp’s "InFo ActiV" big data platform enables the creation of business analytics applications and their deployment on the cloud, allowing rapid realisation of business benefits. A suite of Big Data business analytics applications built on the platform, under the overarching theme of Customer Value Management, gives BigTapp’s customers immediate business benefits.

Job Description:
1. Responsible for the overall product development life cycle

2. Develop, maintain, test, and evaluate big data solutions

3. Analyse requirements, perform impact analysis, and design and develop POCs and products

4. Build data processing systems with Hadoop and Hive using Java or Python (see the sketch after this list)

5. Code to technical specifications and document the user's guide and systems manual

6. Keep abreast of technological advancements, emerging standards, and new software or hardware solutions that may affect decisions on system building or enhancements

7. Any other duties specific to the project
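
For illustration only, here is a minimal sketch of the kind of Hadoop/Hive batch processing mentioned in item 4, written in PySpark. The database, table, and column names (raw_db.customer_txns, amount, and so on) are hypothetical placeholders, not details of this role.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
        .appName("daily-customer-summary")
        .enableHiveSupport()   # allows Spark to read and write Hive tables
        .getOrCreate()
)

# Read raw transactions from a (hypothetical) Hive table
txns = spark.table("raw_db.customer_txns")

# Aggregate spend and transaction count per customer per day
summary = (
    txns.groupBy("customer_id", "txn_date")
        .agg(F.sum("amount").alias("total_spend"),
             F.count("*").alias("txn_count"))
)

# Write the summary back to Hive for downstream analytics applications
summary.write.mode("overwrite").saveAsTable("analytics_db.daily_customer_summary")

spark.stop()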

Candidate Profile:
Education:
BE / B.Tech / MCA / MSc (CS or IT) graduates from reputed institutes.

Experience:
1. A minimum of 3 years of relevant software development experience

2. At least 1 - 2 years of Apache Spark experience

3. 2+ years of experience designing and implementing data ingestion and transformation for big data platforms (Spark, Sqoop, Kafka, Flume, Oozie, ZooKeeper, etc.)

Required Skills:
1. Production experience working with Hadoop implementations (preferably multiple distributions such as Cloudera, Hortonworks, AWS EMR, etc.)

2. Experience in designing, developing, and implementing big data services and service-oriented architecture (SOA) solutions is a must

3. Proven track record of designing highly parallelized data ingestion and transformation jobs in Spark, including Spark Streaming (a sketch follows this list)

4. Knowledge of Java and/or Scala and how they are used in big data projects

5. Ability to design big data architecture based on requirements

6. Exposure to or hands-on work in cloud environments (Amazon / Oracle / Google, etc.)

7. Ability to work on multiple tasks concurrently

8. Experience in cluster set-up, configuration, and maintenance would be an added advantage
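
As a companion to item 3 above, the following is a minimal, illustrative sketch of a parallelized ingestion job using Spark Structured Streaming with Kafka, again in PySpark. The broker address, topic name, schema, and HDFS paths are assumptions for the example only, and running it would also require the spark-sql-kafka connector package on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Hypothetical schema of the JSON events on the topic
event_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("event_type",  StringType()),
    StructField("amount",      DoubleType()),
])

# Read a stream of raw Kafka records (key/value arrive as binary)
raw = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "customer-events")
         .load()
)

# Parse the JSON payload into typed columns
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Continuously append the parsed events to Parquet files on HDFS
query = (
    events.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/customer_events")
          .option("checkpointLocation", "hdfs:///checkpoints/customer_events")
          .outputMode("append")
          .start()
)

query.awaitTermination()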
