(3 - 6 Years) "Cuelogic" Hiring: Big Data Engineer On Sept 2018 @ Pune




Company:     Cuelogic

Website:       www.cuelogic.com

Eligibility:       Any Graduate

Experience:     3 - 6 yrs

Location:         Pune

Job Role:       Big Data Engineer

JOB SUMMARY:

Company Profile:

At Cuelogic, we develop customized software, drive intelligence, and make machines smarter. We partner with Fortune 500 enterprises, SMEs, and leading-edge startups across the globe and apply our Software Driven Thinking to solve their technical challenges.

Job Description:
1. Work with Apache Spark, HDFS, AWS EMR, Spark Streaming, GraphX, MLlib, Cassandra, Elasticsearch, YARN, Hadoop, Hive, AWS cloud services, and SQL.

2. Work with machine learning / deep learning libraries (MLlib, TensorFlow, PyTorch) to implement solutions that solve or automate real-world tasks such as prediction, image processing, object detection, natural language processing, anomaly detection, text-to-speech, and more.

3. Build smart models that can be deployed on edge devices, such as IoT hardware, to perform edge computing and provide predictions locally.

4. Design, implement, and automate the deployment of distributed systems for collecting and processing large data sources.

5. Write ETL/ELT jobs and Spark/Hadoop jobs to perform computation on large-scale datasets.

6. Design streaming applications using Apache Spark and Apache Kafka for real-time computation (a minimal sketch follows this list).

7. Design complex data models and schemas for structured and semi-structured datasets in SQL and NoSQL environments.

8. Deploy and test solutions on cloud platforms such as Amazon EMR, Google Dataproc, and Google Cloud Dataflow.

9. Explore and analyze data using visualization tools such as Tableau and Qlik.
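
The streaming work described in point 6 could look roughly like the minimal PySpark sketch below. It assumes Spark with the spark-sql-kafka connector on the classpath; the broker address, the "clickstream" topic name, and the one-minute window are illustrative assumptions, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, window

# Hypothetical example: count Kafka events per one-minute window with Spark Structured Streaming.
spark = SparkSession.builder.appName("clickstream-counts").getOrCreate()

# Read a stream of raw events from Kafka (broker and topic are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "clickstream")
          .load())

# Kafka values arrive as bytes; cast to string and aggregate per event-time window.
counts = (events
          .selectExpr("CAST(value AS STRING) AS event", "timestamp")
          .groupBy(window(col("timestamp"), "1 minute"))
          .agg(count("*").alias("events")))

# Write the running counts to the console; a real job might sink to Cassandra or Elasticsearch instead.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()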

Candidate Profile:
Experience:
Minimum 1 - 2 years of experience in Apache Spark.

Technical Skills:

1. Experience working with streaming environments (Spark Streaming / Flink).

2. Experience with the Hadoop ecosystem (Hadoop MR, HDFS, Pig, Sqoop, Impala, Hive, Presto).

3. Good experience using the Spark and Hadoop frameworks on Amazon EMR.

4. Strong knowledge of data modelling and design principles in SQL and NoSQL environments.

5. Strong experience working with and building ELT and ETL pipelines and their components (see the batch ETL sketch after this list).

6. Experience or familiarity with visualization tools such as Tableau, Qlik, or Grafana.

7. Strong experience developing REST APIs and consuming data from external web APIs.

8. Comfortable with source control systems (GitHub) and Linux environments.
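
As a rough illustration of point 5, the sketch below shows a small PySpark batch ETL step of the kind that might run on Amazon EMR. The S3 paths and the column names (order_id, created_at, amount) are hypothetical placeholders, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

# Hypothetical batch ETL job: extract raw JSON from S3, transform it, and load curated Parquet.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw order events from S3 (path is a placeholder).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: drop rows without an id, derive a partition date, and normalize the amount type.
orders = (raw
          .filter(col("order_id").isNotNull())
          .withColumn("order_date", to_date(col("created_at")))
          .withColumn("amount", col("amount").cast("double")))

# Load: write partitioned Parquet back to S3 for downstream Hive/Presto queries.
(orders.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-bucket/curated/orders/"))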
