Posts Tagged “Spark”

Spark & Kafka Administrator in Chicago, IL (Full Time)

Please share resume to

Role: Spark & Kafka Administrator

Location: Chicago, IL

No. of Positions: 2

Experience: 8 years

Technical / Functional Skills:

Strong administration experience in IBM WebSphere, Spark, Kafka, Hadoop, YARN, Apache Mesos, and TIBCO.
Good knowledge of Spark & Hadoop
Good knowledge of UNIX shell scripting.

Good communication skills
Good analytical and problem solving skills

Experience Required:
Strong administration experience in Spark, Kafka, Hadoop, YARN, and TIBCO

Operating Systems – Windows and UNIX/Linux

Roles and responsibilities:

  1. Accountable for all administrative activities across all environments (Dev, QA, Production), including configuring and maintaining Spark, Kafka, and Hadoop.
  2. Handle all deployment requests from the development team.
  3. Coordinate technically with the Application, Database, Middleware, and QA support teams.

Read more »

Data Architect in Atlanta, GA (Full Time)

Please share resume to

Role: Data Architect.

Location: Atlanta, GA

Experience: 10 Years

Experience Required:
• Experience in Hortonworks Hadoop Platform
• Knowledge in Airline Domain

Roles and responsibilities:

• 10-15 years of working experience, with 3+ years as a Big Data solutions architect. Needs experience with the major big data solutions, such as Hadoop, MapReduce, Hive, HBase, MongoDB, Cassandra, Spark, Impala, Oozie, Flume, ZooKeeper, Sqoop, Kafka, NiFi, and NoSQL databases.
• Big Data Solution Architect certification preferred; hands-on experience with Hadoop implementations preferred.
• Big Data certification is a must.
• Work experience with various distributions such as Cloudera, Hortonworks, and MapR.
• Work experience with both real-time streaming and batch processing.
• Translate complex functional and technical requirements into detailed design.
• Propose best practices/standards with data security and privacy handling experience.
• Knowledge in handling different kinds of source systems and different formats of data.
• Hands-on experience with Hadoop applications (e.g. administration, configuration management, monitoring, debugging, and performance tuning).
• Strong knowledge of major programming/scripting languages such as Java, R, and Scala, plus Linux shell scripting, as well as experience with ETL tools such as Informatica, Talend, and/or Pentaho.
• Experience in designing multiple data lake solutions with a good understanding of cluster and parallel architecture.
• Experience with cloud computing.
• Able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
• Able to clearly articulate the pros and cons of various technologies and platforms
• Excellent written and verbal communication skills
• Able to perform detailed analysis of business problems and technical environments, and use this in designing the solution
• Able to work in a fast-paced agile development environment

Read more »

Bigdata Engineer in Scottsdale, AZ (Full Time)

Please share resume to

Role: Bigdata Engineer

Location: Scottsdale, AZ

Job Description: 

  • Strong Java background with 6–7 years of experience
  • Strong experience in HDFS, Hive, Pig, Java MapReduce, HBase, Spark, and Sqoop
  • Shell scripting, Unix
  • Experience in SQL
  • Good analytical skills

Read more »

Hadoop, Python, ETL Developer in San Antonio, TX (Full Time)

Please send resumes to

Role: Hadoop, Python, ETL Developer

Location: San Antonio, TX

Total Experience: 4+ Years

Technical Skills: Primary – Hadoop, Python

Experience Required: 4+ Years

Required Skills:
• 3+ years of relevant IT software experience (technical) in Hadoop development
• Experience with databases such as Netezza, Oracle, MS SQL Server 2012+, DB2, MS Access, and NoSQL databases
• Experience with job automation & scheduling software (Control-M)
• Strong ability to write SQL queries

Desired Skills:
• Familiarity with UNIX, Windows, file transfer utilities, Visio, process flow creation, and ETL technologies

Key Skill: Hadoop, Python, Spark


Read more »

Big Data Engineer in Livermore, CA

Please send resume to with rate and contact details

Big Data Engineer
Livermore, CA
Long Term
Rate: Open DOE

Job Description:
· 7+ years of experience in:
o Big data stack – Hadoop, Spark, NoSQL. Strong fundamental knowledge of internals! No superficial concepts.
o Strong programming skills in Python and SQL.
o Experience building data governance and security using core Hortonworks tools like Knox, Ranger, and Kerberos.
o Performance tuning for Hadoop and Spark.
o Understanding of containerization frameworks – Docker, Mesos.
o Basic understanding of data science concepts.
o Experience building a big data stack in the cloud, including HA/DR/data-sync strategies using Kafka/NiFi.
o Ability to support infrastructure engineers in infrastructure automation of the big data stack.
o Strong familiarity with the Hortonworks HDP stack; working experience on HDP is not required as long as the engineer has equivalent open-source stack experience.
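As a rough illustration of the "Python and SQL" skills this posting screens for, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a real warehouse; the table and column names are hypothetical:

```python
import sqlite3

# In-memory database standing in for a real warehouse (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, bytes INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 100), (1, 250), (2, 50)],
)

# Aggregate total bytes per user, largest first.
rows = conn.execute(
    "SELECT user_id, SUM(bytes) AS total FROM events "
    "GROUP BY user_id ORDER BY total DESC"
).fetchall()
print(rows)  # [(1, 350), (2, 50)]
```

The same GROUP BY/ORDER BY pattern carries over to Hive or Spark SQL at scale.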

Read more »

Java Full Stack Developer in Boston,MA (Full Time)

Please send resumes to

Job Title: Java Full Stack Developer

Location: Boston, MA

Job ID: DMS_635115

Skill: Bigdata

Job Description:
• Full stack application development in JEE (MUST)
• Big Data expertise – Hadoop, Hive, Spark, Oozie, NoSQL (HBase/Cassandra), etc.
• JavaScript, HTML5, ReactJS, and NodeJS expertise
• Design pattern usage skills
• Web service (RESTful) expertise
• Expertise in databases (Oracle Database, ERD, normalization, performance tuning, SQL, PL/SQL)
• OLTP and OLAP skills
• Designing using UML
• Ability to lead development (Agile/non-Agile iterative development)
• Expertise in Agile development
• Challenge taker and independent thinker

Read more »

Principal Bigdata/Hadoop Engineer in Mountain View, CA (Full Time)

Please share resume


  • Help set a broad vision for the Opower Data Platform. You will need to understand how all of the pieces of the platform work together to deliver a cohesive product suite.
  • Be a hands-on engineer. We expect our top engineers to contribute to the code base, both working on key components themselves and participating in code reviews with others.
  • Mentor other engineers on the team about best practices in a modern data architecture.
  • Work across Oracle Utilities to understand the needs of other development teams and adjust our architecture and roadmap accordingly.

About You

  • At least 7–10 years of professional experience as an engineer, with substantial Java and/or Python experience
  • Expert in Apache HBase and Apache ZooKeeper
  • Strong working and theoretical knowledge of MySQL and/or Oracle Database, including common optimizations (analysis of query plans, indexing, etc.) and data warehouse modeling techniques
  • Experienced with batch processing applications using technologies such as Hive UDFs, Oozie workflows, and Spark applications
  • Strong background in other distributed applications such as Apache Spark preferred
  • Hands-on Hadoop administration experience preferred

Read more »

BigData Architect in Phoenix, AZ (Full Time)

Please share resume

Job Title: BigData Architect

Location: Phoenix

Skill: Bigdata

Job Description:
• Perfect in Big Data & Hadoop concepts
• Should have knowledge of the Big Data tools Pig and Hive
• Should have command of SQL
• Should have command of Unix shell scripting
• Should be good in Java MapReduce
• Should have knowledge of and exposure to NoSQL databases
• Should have experience in optimizing Hadoop ETL jobs
• Good to have knowledge of Spark and Scala
• Good to have knowledge of the banking domain and risk management systems
• Excellent communication, analytical, and interpersonal skills
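As a toy illustration of the MapReduce model this posting references, here is a minimal word-count sketch in plain Python (not Hadoop itself; the map phase emits words per document, the reduce phase groups and counts them):

```python
from collections import Counter
from itertools import chain

# Map phase: each "document" emits its words (documents are illustrative).
docs = ["spark and hadoop", "hadoop yarn", "spark streaming"]
mapped = (doc.split() for doc in docs)

# Shuffle/reduce phase: group identical keys and sum their counts.
counts = Counter(chain.from_iterable(mapped))
print(counts["hadoop"])  # 2
```

In Hadoop proper, the map and reduce phases run as separate distributed tasks, with the framework handling the shuffle between them.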

Read more »

Data Architect (Senior Consultant) in Memphis (Full Time)

Please share resume

Job ID: 602309

Title: Data Architect (Senior Consultant)

Location: Memphis

Job Description:
• Experience loading various types of data, such as JSON and CSV, into Elasticsearch
• Experience with simple load and bulk load in Elasticsearch
• Experience with Logstash
• Experience writing Elasticsearch queries in Java/Python
• Experience setting up the ELK Stack
• Experience in Big Data technologies such as HDFS, MapReduce, Spark, and Hive is highly desirable
• Work with the distributed analytics team to format and ingest analytic output into Elasticsearch
• Design and implement second-level analytics using the Elastic Stack
• Provide recommendations and design optimal configurations for large-scale deployments
• Perform Elasticsearch performance and configuration tuning
• Collaborate with the dev team to develop and optimize Kibana visualizations

Read more »

Vertica (Technical Lead) in Collierville (Full Time)

Please share resume

Job ID: 611898

Vertica (Technical Lead)

Location: Collierville

Job Description:
• Total experience minimum 7 years, with minimum 3 years of experience in Hadoop
• Thorough understanding of Hadoop (Cloudera/Hortonworks/MapR) and ecosystem components
• Thorough understanding of NoSQL databases like HBase, MongoDB, Cassandra, etc.
• Requirements gathering, designing, and developing scalable big data solutions with Hadoop
• Strong technical skills in Spark, HBase, Hive, Sqoop, Oozie, Flume, Java, Pig, Python, etc.
• Good experience with distributed systems, large-scale non-relational data stores, MapReduce systems, performance tuning, and multi-terabyte data warehouses
• Good experience with Unix shell scripting and search engines like Solr/Elasticsearch
• Practical implementation experience with the MapReduce framework, databases, and SQL
• Exposure to Hadoop capacity planning
• Strong analytical & problem-solving skills; proven teamwork and communication skills
• Must show initiative and a desire to learn the business
• Able to work independently and mentor team members
• Desirable: Hands-on experience with analytical tools, languages, or libraries (e.g., R)
• Desirable: Experience with Vertica/Greenplum/Netezza/Teradata appliances

Read more »