About this job

JOB SUMMARY
Responsible for the design, development, and implementation of Big Data projects. Oversees, performs, and manages Big Data projects and operations. Resolves issues regarding development, operations, implementations, and system status. Researches and recommends options for department direction on Big Data systems, automated solutions, and server-related topics. Manages and maintains all production and non-production Hadoop clusters.

MAJOR DUTIES AND RESPONSIBILITIES
• Strong knowledge of Hadoop architecture and its implementation.
• Manage Hadoop environments and perform installation, administration, and monitoring tasks.
• Install and configure software updates and deploy application code releases to production and non-production environments.
• Strong understanding of best practices in maintaining medium- to large-scale Hadoop clusters.
• Design and maintain access and security administration.
• Design, implement, and maintain backup and recovery strategies on Hadoop clusters.
• Support multiple clusters of medium complexity with multiple concurrent users, ensuring control, integrity, and accessibility of data.
• Design, install, configure, and maintain high availability.
• Perform capacity planning of Hadoop clusters and provide recommendations to management to sustain business growth.
• Design, implement, and maintain disaster recovery (DR) methodologies, and create documentation.
• Gather business requirements and design and implement multiple projects, along with providing production support.
• Create standard operating procedures and templates.
• Experience mentoring other administrators.
• Experience with the whole Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark, and Spark Streaming, as well as NoSQL database knowledge.
• Proactively identify opportunities to implement automation and monitoring solutions.
• Proficient in setting up and using Cloudera Manager as a monitoring and diagnostics tool, and in identifying and resolving performance issues.
• Coordinate with Development, Network, Infrastructure, and other organizations as necessary to get work done.
• Participate in a 24x7 on-call pager rotation.
• Good knowledge of Windows/Linux/Solaris operating systems and shell scripting.
• Strong desire to learn a variety of technologies and processes with a "can do" attitude.
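The capacity-planning duty above usually reduces to simple arithmetic: usable data is multiplied by the HDFS replication factor and an overhead allowance, then divided by a target utilization ceiling. A minimal sketch in Python, where the 3x replication default, 25% temporary-data overhead, and 70% utilization target are illustrative assumptions rather than figures from this posting:

```python
def required_raw_capacity_tb(usable_data_tb: float,
                             replication_factor: int = 3,
                             temp_overhead: float = 0.25,
                             max_utilization: float = 0.70) -> float:
    """Estimate raw HDFS capacity (TB) needed for a given amount of usable data.

    replication_factor: HDFS block replication (3 is the common default).
    temp_overhead: headroom for intermediate/shuffle data (assumed 25%).
    max_utilization: target ceiling on disk usage (assumed 70%).
    """
    raw = usable_data_tb * replication_factor * (1 + temp_overhead)
    return raw / max_utilization

# 100 TB of usable data -> 100 * 3 * 1.25 / 0.70, roughly 536 TB of raw disk
print(round(required_raw_capacity_tb(100), 1))
```

A recommendation to management would then translate the raw-terabyte figure into node counts based on per-node disk density.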


REQUIRED QUALIFICATIONS
Skills / Abilities and Knowledge
Ability to read, write, speak, and understand English.

Ability to communicate orally and in writing in a clear and straightforward manner
Ability to communicate with all levels of management and company personnel
Ability to handle multiple projects and tasks
Ability to make decisions and solve problems while working under pressure
Ability to prioritize and organize effectively
Ability to show judgment and initiative in accomplishing job duties
Ability to use a personal computer and software applications (e.g., word processing, spreadsheets)
Ability to work independently
Ability to work with others to resolve problems and handle requests or situations
Ability to effectively consult with department managers and leaders

EDUCATION
• BS in Information Technology, Computer Science, MIS, or a related field, or equivalent experience.

RELATED WORK EXPERIENCE
• 8-10 years of hands-on experience handling large-scale software development and integration projects
• 6+ years of experience with Linux/Windows, with basic knowledge of Unix administration
• 3+ years of experience administering Hadoop cluster environments and their tools ecosystem: Cloudera/Hortonworks/Sqoop/Pig/HDFS
• Experience with Spark and Kerberos authentication/authorization, and a clear understanding of cluster security
• Exposure to high-availability configurations, Hadoop cluster connectivity and tuning, and Hadoop security configurations
• Expertise in collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
• Experience working with load balancers, firewalls, DMZs, and TCP/IP protocols
• Understanding of enterprise IT operations practices for security, support, backup, and recovery
• Good understanding of operating systems (Unix/Linux) and networks, and system administration experience
• Good understanding of change management procedures
• Experience managing LDAP, Active Directory, or Kerberos
• Experience with hardware selection and capacity planning
• Experience with Java, Python, Pig, Hive, or other languages a plus

PREFERRED QUALIFICATIONS
• Experience working with RDBMS and Java
• Exposure to NoSQL databases such as MongoDB and Cassandra
• Experience with cloud technologies (AWS)
• Certification in Hadoop Operations or Cassandra is desired

WORKING CONDITIONS
Office environment


EOE Race/Sex/Vet/Disability
Charter is an equal opportunity employer that complies with the laws and regulations set forth in the EEO Is the Law poster.
Charter is committed to diversity, and values the ways in which we are different.