Posts Tagged “Hadoop”

Spark & Kafka Administrator in Chicago, IL (Full Time)

Please share resume to

Role: Spark & Kafka Administrator

Location: Chicago, IL

No. of Positions: 2

Experience: 8 years

Technical / Functional Skills:

Strong administration experience in IBM WebSphere, Spark, Kafka, Hadoop, YARN, Apache Mesos, and TIBCO
Good knowledge of Spark & Hadoop
Good knowledge of UNIX shell scripting

Good communication skills
Good analytical and problem-solving skills

Experience Required:
Strong administrative experience in Spark & Kafka, Hadoop, YARN, and TIBCO

Operating System – Windows and UNIX/Linux

Roles and responsibility:

  1. Accountable for all administrative activities across all environments (Dev, QA, Production), including configuring and maintaining Spark, Kafka & Hadoop.
  2. Handle all deployment requests from the development team.
  3. Coordinate technically with the Application, Database, Middleware, and QA support teams.

Read more »

Data Architect in Atlanta, GA (Full time)

Please share resume to

Role: Data Architect.

Location: Atlanta, GA

Experience: 10 Years

Experience Required:
• Experience in Hortonworks Hadoop Platform
• Knowledge in Airline Domain

Roles and responsibility:

• 10-15 years of working experience, with 3+ years of experience as a Big Data solutions architect. Needs to have experience with the major big data solutions such as Hadoop, MapReduce, Hive, HBase, MongoDB, Cassandra, Spark, Impala, Oozie, Flume, ZooKeeper, Sqoop, Kafka, NiFi, and NoSQL databases.
• Big Data Solution Architect certification preferred; hands-on experience on Hadoop implementations preferred.
• Big Data certification is a must.
• Work experience on various distributions such as Cloudera, Hortonworks, MapR etc.
• Work experience on both Real Time Streaming and Batch processing.
• Translate complex functional and technical requirements into detailed design.
• Propose best practices/standards with data security and privacy handling experience.
• Knowledge in handling different kinds of source systems and different formats of data.
• Hands-on experience with Hadoop applications (e.g. administration, configuration management, monitoring, debugging, and performance tuning).
• Strong knowledge of major programming/scripting languages such as Java, R, and Scala, plus Linux shell scripting, as well as experience working with ETL tools such as Informatica, Talend, and/or Pentaho.
• Experience in designing multiple data lake solutions with a good understanding of cluster and parallel architecture.
• Experience with cloud computing.
• Ability to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them.
• Ability to clearly articulate the pros and cons of various technologies and platforms.
• Excellent written and verbal communication skills.
• Ability to perform detailed analysis of business problems and technical environments and use this in designing the solution.
• Ability to work in a fast-paced agile development environment.

Read more »

Java/Python/Hadoop Developer in Atlanta, GA (Full Time)

Please share resume to

Role: Java/Python/Hadoop Developer

Location: Atlanta, GA

No. of Positions: 2

Job Description:

– Python backend scripting – 4+ years
– Core Java – 6+ years
– Hadoop – 4+ years
– Unix scripting – 4+ years
– Very strong SQL skills
– Hands-on development experience in all of the above skills is a must
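The pairing of Python backend scripting with strong SQL that this posting asks for can be sketched in a few lines. This minimal example uses Python's built-in sqlite3 module to stand in for a production database (the `orders` table and its columns are purely illustrative, not from the posting):

```python
import sqlite3

# In-memory database stands in for a production RDBMS (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 120.0), (2, "west", 75.5), (3, "east", 30.0)],
)

# Aggregate revenue per region -- the kind of SQL the role calls for.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 75.5)]
```

The same scripting pattern carries over to the databases named above (Netezza, Oracle, SQL Server) via their respective Python drivers.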

Read more »

Hadoop, Python, ETL Developer in San Antonio, TX (Full Time)

Please send resumes to

Role: Hadoop, Python, ETL Developer

Location: San Antonio, TX

Total Experience: 4+ Years

Technical Skills: Primary – Hadoop, Python

Experience Required: 4+ Years

Required Skills:
• 3+ years of relevant IT software (technical) experience in Hadoop development
• Experience with databases such as Netezza, Oracle, MS SQL Server 2012+, DB2, and MS Access, as well as NoSQL databases
• Experience with job automation & scheduling software (Control-M)
• Strong ability to write SQL queries

Desired Skills:
• Familiarity with UNIX, Windows, file transfer utilities, Visio, process flow creation, and ETL technologies

Key Skill: Hadoop, Python, Spark


Read more »

Big Data Engineer – Boston, MA (Full Time)

If you are available and interested for this position please send resume to

Title: Big Data Engineer

Location: Boston, MA

Type of Hire: Full Time


  • Provide engineering and design support for Hadoop build, configuration, monitoring and supportability
  • Manage the data loading into Hadoop, Hive and various column store databases
  • Translate complex functional and technical requirements into detailed big data design solutions conformant to enterprise standards, architecture and technologies
  • Maintain security and data privacy
  • Perform analysis of vast data stores and uncover insights
  • Participate in POC efforts to help build new Hadoop clusters
  • Test prototypes and oversee handover to operational teams
  • Monitor and tune the performance of the Hadoop ecosystem
  • Monitor databases to ensure accurate and appropriate use of data and perform quality control of database activities
  • Design maintainable databases for highly available and reliable solutions to meet service levels
  • Work with Architecture and Development teams to test and benchmark new versions, patches and software components for functionality and reliability
  • Troubleshoot and correct problems discovered in production databases
  • Follow change management procedures and help to create policies and best practices for all database environments
  • Analyze and make decisions about optimization of existing schemas and queries for all RDBMS database instances
  • Create and publish design documents, usage patterns, and cookbooks for user community
  • Maintain involvement in continuous improvement of Big Data solution processes, tools and templates
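Several of the bullets above revolve around loading data into Hadoop and Hive, where tables are commonly partitioned by date into `dt=<value>` directories. The sketch below illustrates only that partition layout, in plain Python against local files; the meter-read records and file names are hypothetical, and a real load would go through Spark, Hive, or a column store rather than the local filesystem:

```python
import csv
import os
import tempfile
from collections import defaultdict

# Toy records standing in for ingested reads (illustrative data only).
records = [
    {"meter_id": "m1", "dt": "2024-01-01", "kwh": "3.2"},
    {"meter_id": "m2", "dt": "2024-01-01", "kwh": "1.9"},
    {"meter_id": "m1", "dt": "2024-01-02", "kwh": "4.1"},
]

# Group rows by partition key, mirroring Hive's dt=<value> directory layout.
by_date = defaultdict(list)
for rec in records:
    by_date[rec["dt"]].append(rec)

base = tempfile.mkdtemp()
for dt, rows in by_date.items():
    part_dir = os.path.join(base, f"dt={dt}")
    os.makedirs(part_dir)
    with open(os.path.join(part_dir, "part-00000.csv"), "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["meter_id", "kwh"])
        writer.writeheader()
        writer.writerows({"meter_id": r["meter_id"], "kwh": r["kwh"]} for r in rows)

print(sorted(os.listdir(base)))  # ['dt=2024-01-01', 'dt=2024-01-02']
```

Partitioning by date this way is what lets Hive prune whole directories when a query filters on `dt`.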

Read more »

Big Data Engineer in Livermore, CA

Please send resume to with rate and contact details

Big Data Engineer
Livermore, CA
Long Term
Rate: Open DOE

Job Description:
· 7+ years of experience in:
o Big data stack – Hadoop, Spark, NoSQL. Strong fundamental knowledge of internals; no superficial concepts.
o Strong programming skills in Python and SQL.
o Experience building data governance and security using core Hortonworks tools such as Knox, Ranger, and Kerberos.
o Performance tuning for Hadoop and Spark.
o Understanding of containerization frameworks – Docker, Mesos.
o Basic understanding of data science concepts.
o Experience building a big data stack in the cloud, including HA/DR/data-sync strategies using Kafka/NiFi.
o Ability to support infrastructure engineers with infrastructure automation of the big data stack.
o Strong familiarity with the Hortonworks HDP stack; working experience on HDP itself is not required as long as the engineer has equivalent open-source stack experience.

Read more »

Java Full Stack Developer in Boston, MA (Full Time)

Please send resumes to

Job Title: Java Full Stack Developer

Location: Boston, MA

Job ID: DMS_635115

Skill: Bigdata

Job Description:
• Full stack application development in JEE (MUST)
• Big Data expertise – Hadoop, Hive, Spark, Oozie, NoSQL (HBase/Cassandra), etc.
• JavaScript, HTML5, ReactJS and NodeJS expertise
• Design pattern usage skills
• Web service (RESTful) expertise
• Expertise in databases (Oracle Database, ERD, normalization, performance tuning, SQL, PL/SQL)
• OLTP and OLAP skills
• Designing using UML
• Ability to lead development (Agile/non-Agile iterative development)
• Expertise in Agile development
• Challenge taker and independent thinker

Read more »

Hadoop Engineer in Madison, WI (Full Time)

Please send resumes to

Job Title: Hadoop Engineer

Location: Madison, WI

Job ID: DMS_635285

Skill: Bigdata

Job Description:

  • Experience with Hadoop (HDInsight). [Mandatory]
  • Experience with MapReduce, ZooKeeper, HDFS, Pig, Sqoop and Hive. [Mandatory]
  • Experience with Azure cloud services. [Mandatory]
  • Experience with scheduling tools like Control-M and Oozie. [Mandatory]
  • Experience monitoring Hadoop cluster performance. [Preferred]
  • Experience with SSIS and SQL Server. [Preferred]
  • Experience with a programming language like Python or C#. [Preferred]
  • Knowledge of Linux system monitoring and analysis. [Preferred]
  • Understanding of ETL principles and how to apply them within Hadoop. [Preferred]
  • Knowledge of Spark and understanding of DAGs a plus. [Preferred]

Read more »

Principal Hadoop Software Engineer – Full Time



This is an urgent position with our client, and they are willing to hire ASAP. Kindly go through the requirement, and if you are available and interested, please feel free to reach me at or 302-401-1081.


Job Title: Principal Hadoop Software Engineer

Location: Santa Clara, CA

Job Type: Full Time Permanent 


Job Description:


About Oracle Utilities (Opower)

At Oracle Utilities (Opower), we’re applying cutting-edge computer science to one of humanity’s greatest challenges: Energy. Our utility customers in the United States and abroad give us energy usage data for tens of millions of their customers, which we then analyze and aggregate using state-of-the-art tools such as Hadoop, HBase, and Spark. If you are a top-notch engineer looking for a fast-paced place to work while being surrounded by highly skilled and driven peers, then Opower is the place for you.

About the Job

The Opower Data Platform team is responsible for all of the big data infrastructure that powers our SaaS analytics platform. Our team manages the services that ingest hundreds of millions of smart meter data reads a day and other key customer data. We provide Hadoop clusters for running analytics and machine learning algorithms, maintain BI and reporting tools, and run web services to make all of this data available to developers inside and outside of Oracle.


We are looking for an expert big data engineer who can help drive the next version of our architecture to support new sources of data and new end-users.


  • Help set a broad vision for the Opower Data Platform. You will need to understand how all of the pieces of the platform work together to deliver a cohesive product suite.
  • Be a hands-on engineer. We expect our top engineers to contribute to the code base, both working on key components themselves and participating in code reviews with others.
  • Mentor other engineers on the team about best practices in a modern data architecture.
  • Work across Oracle Utilities to understand the needs of other development teams and adjust our architecture and roadmap accordingly.

About You

  • At least 7-10 years of professional experience as an engineer, with substantial Java and/or Python experience
  • Expert in Apache HBase and Apache Zookeeper
  • Strong working and theoretical knowledge of MySQL and/or Oracle Database, including common optimizations (analysis of query plans, indexing, etc.) and data warehouse modeling techniques
  • Experienced with batch processing applications using technologies such as Hive UDFs, Oozie workflows, and Spark applications
  • Strong background in other distributed applications such as Apache Spark preferred
  • Hands-on Hadoop administrative experience preferred
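One common form the Hive UDF experience mentioned above can take is a Python script invoked through Hive's TRANSFORM clause, which streams tab-separated rows through stdin/stdout. A minimal sketch under that assumption follows; the meter_id/kwh column layout and the 10 kWh threshold are illustrative, not from the posting:

```python
def transform(line: str) -> str:
    """Hive TRANSFORM contract: one tab-separated input row in, one
    tab-separated output row out. Assumed columns: meter_id, kwh."""
    meter_id, kwh = line.rstrip("\n").split("\t")
    band = "high" if float(kwh) > 10.0 else "normal"  # illustrative threshold
    return f"{meter_id}\t{kwh}\t{band}"

# Under Hive this loop would read sys.stdin; sample rows are used here instead.
for raw in ["m1\t12.5\n", "m2\t3.0\n"]:
    print(transform(raw))
```

In Hive the script would be registered with ADD FILE and invoked via SELECT TRANSFORM(...) USING, with the query engine handling the streaming.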

Thanks and Regards,


Justin Mathew

Vice President

Enormous Enterprise LLC

Consulting | Innovation | Management

Phone: 302-401-1081

Read more »

BigData Architect in Phoenix, AZ (Full Time)

Please share resume

Job Title: BigData Architect

Location: Phoenix, AZ

Skill: Bigdata

Job Description:
• Perfect in Big Data & Hadoop concepts
• Should have an idea of Big Data tools Pig and Hive
• Should have command of SQL
• Should have command of Unix shell scripting
• Should be good in Java MapReduce
• Should have knowledge of and exposure to NoSQL databases
• Should have experience in optimizing Hadoop ETL jobs
• Good to have knowledge of Spark and Scala
• Good to have knowledge of the banking domain and risk management systems
• Excellent communication, analytical and interpersonal skills

Read more »