Posts Tagged “Hive”

Data Architect in Atlanta, GA (Full Time)

Please share resume to anu@enormousenterprise.com

Role: Data Architect

Location: Atlanta, GA

Experience: 10 Years

Experience Required:
• Experience in Hortonworks Hadoop Platform
• Knowledge in Airline Domain

Roles and responsibility:

• 10-15 years of working experience, with 3+ years as a Big Data solutions architect. Needs to have experience with major big data solutions like Hadoop, MapReduce, Hive, HBase, MongoDB, Cassandra, Spark, Impala, Oozie, Flume, ZooKeeper, Sqoop, Kafka, and NiFi, as well as NoSQL databases.
• Big Data Solution Architect certification preferred. Hands-on experience with Hadoop implementations preferred.
• Big Data certification is a must.
• Work experience with various distributions such as Cloudera, Hortonworks, and MapR.
• Work experience with both real-time streaming and batch processing.
• Translate complex functional and technical requirements into detailed design.
• Propose best practices/standards, with experience handling data security and privacy.
• Knowledge in handling different kinds of source systems and different formats of data.
• Hands-on experience with Hadoop applications (e.g. administration, configuration management, monitoring, debugging, and performance tuning).
• Strong knowledge of major programming/scripting languages like Java, R, and Scala, plus Linux shell scripting, as well as experience with ETL tools such as Informatica, Talend, and/or Pentaho.
• Experience in designing multiple data lake solutions with a good understanding of cluster and parallel architecture.
• Experience with cloud computing.
• To be able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them;
• To be able to clearly articulate pros and cons of various technologies and platforms;
• To have excellent written and verbal communication skills;
• To be able to perform detailed analysis of business problems and technical environments and use this in designing the solution;
• To be able to work in a fast-paced agile development environment


Bigdata Engineer in Scottsdale, AZ (Full Time)

Please share resume to anu@enormousenterprise.com

Role: Bigdata Engineer

Location: Scottsdale, AZ

Job Description: 

  • Strong Java background with 6-7 years of experience
  • Strong experience in HDFS, Hive, Pig, Java MapReduce, HBase, Spark, and Sqoop
  • Shell scripting, Unix
  • Experience in SQL
  • Good analytical skills


Big Data Engineer in Madison WI (Full Time)

Please send resumes to anu@enormousenterprise.com

Role: Big Data Engineer 

Location: Madison WI

Type to Hire: Full Time

Job Description:

  • Experience with Hadoop (HDInsight). [Mandatory]
  • Experience with MapReduce, ZooKeeper, HDFS, Pig, Sqoop, and Hive. [Mandatory]
  • Experience with Azure cloud services. [Mandatory]
  • Experience with scheduling tools like Control-M and Oozie. [Mandatory]
  • Experience monitoring Hadoop cluster performance. [Preferred]
  • Experience with SSIS and SQL Server. [Preferred]
  • Experience with a programming language like Python or C#. [Preferred]
  • Knowledge of Linux system monitoring and analysis. [Preferred]
  • Understanding of ETL principles and how to apply them within Hadoop. [Preferred]
  • Knowledge of Spark and an understanding of DAGs a plus. [Preferred]


Big Data Engineer in Madison, WI (Full Time)

Please send resume to jmathew@enormousenterprise.com

Role: Big Data Engineer 

Type to Hire: Full Time

Location: Madison WI

 

  • Experience with Hadoop (HDInsight). [Mandatory]
  • Experience with MapReduce, ZooKeeper, HDFS, Pig, Sqoop, and Hive. [Mandatory]
  • Experience with Azure cloud services. [Mandatory]
  • Experience with scheduling tools like Control-M and Oozie. [Mandatory]
  • Experience monitoring Hadoop cluster performance. [Preferred]
  • Experience with SSIS and SQL Server. [Preferred]
  • Experience with a programming language like Python or C#. [Preferred]
  • Knowledge of Linux system monitoring and analysis. [Preferred]
  • Understanding of ETL principles and how to apply them within Hadoop. [Preferred]
  • Knowledge of Spark and an understanding of DAGs a plus. [Preferred]


Java Full Stack Developer in Boston, MA (Full Time)

Please send resumes to anu@enormousenterprise.com

Job Title: Java Full Stack Developer

Location: Boston, MA

Job ID: DMS_635115

Skill: Bigdata

Job Description:

• Full stack application development in JEE (MUST)
• Big Data expertise: Hadoop, Hive, Spark, Oozie, NoSQL (HBase/Cassandra), etc.
• JavaScript, HTML5, ReactJS, and NodeJS expertise
• Design pattern usage skills
• Web service (RESTful) expertise
• Expertise in databases (Oracle Database, ERD, normalization, performance tuning, SQL, PL/SQL)
• OLTP and OLAP skills
• Designing using UML
• Ability to lead development (Agile/non-Agile iterative development)
• Expertise in Agile development
• Challenge taker and independent thinker


ETL Developer in Phoenix, AZ (Full Time)

Please share resume to anu@enormousenterprise.com

Job Title: ETL Developer

Location: Phoenix
Skill: Bigdata

Job Description:

• 8+ years of experience in data warehousing as an ETL and big data developer
• 6+ years of hands-on experience implementing core ETL functionality using a core ETL tool like Ab Initio, along with UNIX shell scripting
• 3+ years of hands-on experience on a Big Data platform using Hadoop components like HBase, Hive, Pig, and Oozie, and Apache Spark components such as Spark SQL and Spark Core
• 3+ years of experience on the MapR Hadoop platform; working knowledge of the Cloudera platform
• 3+ years of hands-on experience with core Java and Python programming


Data Architect (Senior Consultant) in Memphis (Full Time)

Please share resume to anu@enormousenterprise.com

Job ID: 602309

Title: Data Architect (Senior Consultant)

Location: Memphis

Job Description:

• Experience loading various types of data, such as JSON and CSV, into Elasticsearch
• Experience with simple load and bulk load in Elasticsearch
• Experience with Logstash
• Experience writing Elasticsearch queries in Java/Python
• Experience setting up the ELK Stack
• Experience in Big Data (HDFS, MapReduce, Spark, and Hive) is highly desirable
• Work with the distributed analytics team to format and ingest analytic output into Elasticsearch
• Design and implement second-level analytics using the Elastic Stack
• Provide recommendations and design optimal configurations for large-scale deployments
• Perform Elasticsearch performance and configuration tuning
• Collaborate with the dev team to develop and optimize Kibana visualizations


Vertica (Technical Lead) in Collierville (Full Time)

Please share resume to anu@enormousenterprise.com

Job ID: 611898

Vertica (Technical Lead)

Location: Collierville

Job Description:

• Minimum 7 years of total experience, with at least 3 years of experience in Hadoop
• Thorough understanding of Hadoop (Cloudera/Hortonworks/MapR) and its ecosystem components
• Thorough understanding of NoSQL databases like HBase, MongoDB, and Cassandra
• Requirements gathering, and designing and developing scalable big data solutions with Hadoop
• Strong technical skills in Spark, HBase, Hive, Sqoop, Oozie, Flume, Java, Pig, Python, etc.
• Good experience with distributed systems, large-scale non-relational data stores, MapReduce systems, performance tuning, and multi-terabyte data warehouses
• Good experience with Unix shell scripting and search engines like Solr/Elasticsearch
• Practical implementation experience with the MapReduce framework, databases, and SQL
• Exposure to Hadoop capacity planning
• Strong analytical and problem-solving skills; proven teamwork and communication skills
• Must show initiative and a desire to learn the business
• Able to work independently and mentor team members
• Desirable: hands-on experience with analytical tools, languages, or libraries (e.g. R)
• Desirable: experience with Vertica/Greenplum/Netezza/Teradata appliances


Software Engineer in Quincy (Full Time)

Please share your resume to anu@enormousenterprise.com

Title: Software Engineer

Job ID: ENR_605322

Location: Quincy

Duration: Full Time

Job Description

• Experience with large-scale distributed computing and Big Data methods, such as MapReduce, Hadoop, Spark, Hive, Impala, Pig, Kafka, or Storm, and experience in object-oriented design and development with Java

• Fluency in several programming languages such as Python, Scala, or Java, with the ability to pick up new languages and technologies quickly; 6+ years of Java or related software development experience, developing Big Data applications or backend service architectures

• Ability to architect and implement data processing into a data lake on an AWS stack (EMR, Kinesis, S3)

• Ability to rapidly architect, design, prototype, and implement architectures to tackle the Big Data and Data Science needs of State Street

• Research, experiment with, and utilize leading Big Data technologies, such as Cloudera, Hadoop, Spark, Hive, and Oozie

• Work in cross-disciplinary teams with experts to understand client needs and ingest rich data sources such as financial data, and operational data

• Communicate results and educate others through insightful visualizations, reports, and presentations

• Strong analytical & problem solving skills and strong written and verbal communication skills; ability to work in dynamic team environments and multi-task effectively

 


Project Lead/Hadoop in Madison (Full Time)

Please share your resume to anu@enormousenterprise.com

Title: Project Lead/Hadoop

Job ID: ENR_604020

Location: Madison

Duration: Full Time

Job Description

Hadoop, Spark, HBase, Hive, Pig, R, etc. This person has solid skills in Java, C++, Python, Scala, and Unix script development, as well as in modeling NoSQL data stores, and has a good understanding of Spark or Hadoop MapReduce-style programming. Experience in optimizing the management of, and deriving insights from, non-structured non-relational data, and providing business value from content through improved information management, is key. Experience analyzing text, streams, documents, social media, big data, and speech with emerging Hadoop-based big data, NoSQL, Natural Language Processing, search, and text analytics technologies and techniques. Apply big data technologies such as Hadoop, Spark, or Streams with NoSQL data management and related programming languages for analytics and experimentation with large, multi-structured data sets.

Appropriate skills include:

• Experience with Hadoop and Spark
• Experience with Linux Red Hat
• Experience with higher-level programming languages like Java, Scala, and Python
• Knowledge of BigInsights administration
• System integration
