Systems Administrator (Linux)

    KBR
    Colorado Springs, CO
    Full-time

    Job Description

    Position Summary

    KBR is seeking to hire a Systems Administrator (Linux). This candidate will be required to have a strong understanding of the Apache open-source ecosystem, Infrastructure as a Service (IaaS) (Terraform), Platform as a Service (PaaS) (Docker), DevOps lifecycle management (Gitlab, JetBrains MPS), and Identity and Access Management (Gluu) technologies used to manage virtual private clouds (VPCs) on multiple classified DoD networks and AWS GovCloud. The candidate must be expert in the troubleshooting and configuration of bare-metal and virtual private cloud Linux servers, to include protocols and switching, DNS (Route 53), best-practice configurations, PKI, role-based access controls, and connectivity and networking concepts. Applies specific knowledge of systems administration concepts, practices, and procedures, and is required to be available for on-call mission support.
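
    As a rough illustration of the IaaS workflow this role supports, the sketch below walks through a typical Terraform provisioning cycle against AWS GovCloud. The profile name and configuration directory are hypothetical placeholders, not details from this posting.

        # Minimal sketch of a Terraform provisioning cycle for a VPC in AWS GovCloud.
        # The AWS profile and configuration directory are illustrative assumptions.
        export AWS_PROFILE=govcloud-admin        # hypothetical named credentials profile
        export AWS_DEFAULT_REGION=us-gov-west-1  # an AWS GovCloud (US) region

        cd vpc-environment/                      # hypothetical Terraform configuration dir
        terraform init                           # initialize providers and backend
        terraform plan -out=vpc.tfplan           # preview changes before applying
        terraform apply vpc.tfplan               # create or update the VPC resources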

    The Hadoop System Administrator (Hadoop/Linux) is responsible for implementing, managing, and administering the overall Hadoop infrastructure: adding and removing nodes using cluster monitoring tools, configuring NameNode high availability, and keeping track of all running Hadoop jobs. Uses clear knowledge of Hadoop cluster configuration and should be able to implement private and shared datasets using HDFS, Hive, and other cluster services. Recommends system sizing and scaling based on current workloads.
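
    For context, checks like the following are typical of the duties described above. This is a minimal sketch using standard Hadoop command-line tools; the NameNode service IDs (nn1, nn2) are assumptions that vary per cluster.

        # Minimal sketch of routine Hadoop cluster checks (service IDs are assumed).
        hdfs dfsadmin -report                    # live/dead DataNodes and per-node capacity
        hdfs haadmin -getServiceState nn1        # which NameNode in the HA pair is active
        hdfs haadmin -getServiceState nn2
        yarn application -list                   # all currently running YARN applications
        hdfs fsck / -files -blocks | tail -n 20  # spot-check file system health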

    The Systems Administrator (Linux) applies specific knowledge of system administration concepts, practices, and procedures. Works on complex assignments and performs a wide range of system administration activities with little to no direct supervision. Must be a self-starter, able to work under aggressive deadlines, and familiar with varied concepts, standard best business practices, and procedures. The ideal candidate can plan and execute tasks based on experience, judgment, and independent research.

    Responsibilities:
    • Responsible for implementation and support of multiple Virtual Private Clouds (VPCs) running on AWS GovCloud, multiple classified DoD networks, and a corporate standalone network.
    • Responsible for implementation and support of the IaaS, PaaS, and DevOps tool sets used to stand up and manage multiple VPCs supporting Hadoop-based data lakes.
    • Implements concepts of the Hadoop ecosystem such as YARN, MapReduce, HDFS, Kafka, Zookeeper, Pig, Hive, and Accumulo. Responsibilities include design, capacity arrangement, cluster setup, performance fine-tuning, monitoring, structure planning, scaling, and administration.
    • Accountable for storage, performance tuning, and volume management of Hadoop clusters and MapReduce routines; manage and monitor the cluster file system; manage system backups and DR plans.
    • Monitor Hadoop job cluster connectivity and performance; manage and analyze Hadoop log files; perform file system management and monitoring.
    • Install operating system and Hadoop updates, patches and version upgrades when required.
    • Development activities as required by Big Data project specifications, working with Python, Apache Spark, and/or Java/Scala (see the job-submission sketch after this list)
    • Work with the Information Assurance engineer to implement and support FedRAMP and DoD IA security configuration standards on multiple classified networks and AWS GovCloud (ACAS, STIGs, event logging, security controls, NIST RMF artifacts to support an IATT).
    • Participate in configuration and change management planning meetings as subject matter expert.
    • Set up new DevOps users with appropriate integrated development environments.
    • Set up and manage the Gitlab repository for DevOps lifecycle management.
    • Work closely with infrastructure, network, database, information assurance, business intelligence, and DevOps teams to ensure business applications are readily available and performing within agreed-upon service levels.
    • Consult with internal and external users to identify and document user objectives.
    • Provide root-cause analysis for recurring or critical problems. Support incident response requirements.
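
    As referenced in the Big Data development bullet above, submitting a Spark job on such a cluster typically looks like the following minimal sketch. The application file name and resource sizing are hypothetical placeholders, not project specifics.

        # Minimal sketch of submitting a PySpark job to the cluster's YARN scheduler.
        # The file name and executor sizing below are illustrative assumptions.
        spark-submit \
          --master yarn \
          --deploy-mode cluster \
          --num-executors 4 \
          --executor-memory 4g \
          ingest_job.py
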
    Education/Certification Requirements
    • Bachelor's degree or equivalent in Information Technology, Computer Science, Information Systems, Engineering, Business, or another related scientific or technical discipline, with 5 or more years of related professional experience (education can be substituted for years of experience).
    • AWS Associate-level certification or higher in SysOps or Developer is a plus.
    Qualifications
    • 5+ years of experience in a Systems Administrator role, including 2+ years of experience in a Hadoop environment. Additional experience working with data warehousing and business intelligence tools, techniques, and technology is strongly desired
    • 5+ years of Linux systems administration experience, or equivalent experience
    • Strong knowledge of and experience with computer hardware, including servers, networking, SAN technologies, and I/O subsystems.
    • Experience with AWS computing and storage planning and provisioning tools.
    • Excellent knowledge of Hadoop Architecture and system internals, including updates and features on new versions.
    • Solid experience in Hadoop system configuration and setup with its utilities and tool sets.
    • 1+ years of experience with Terraform, Docker, and Puppet tools, including installation options, problem determination and recovery, and security
    • Strong analytical and troubleshooting skill set.
    • Strong communication, organizational, and time-management skills.
    • Experience writing and debugging shell scripts in a standard UNIX shell (see the brief example after this list)
    • Current IAT Level II certification
    • Required to be on-call 24x7x365 and respond in a timely manner
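
    For illustration of the shell-scripting requirement above, a brief example follows. This is a minimal sketch; the threshold and log path are hypothetical assumptions, not details from this posting.

        #!/bin/bash
        # Minimal sketch: warn when root filesystem usage crosses a threshold.
        # The threshold and log file below are hypothetical assumptions.
        THRESHOLD=90
        LOGFILE=/var/log/disk-check.log

        # Extract the usage percentage for / from df output (digits only).
        usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')

        if [ "$usage" -ge "$THRESHOLD" ]; then
            echo "$(date -Is) WARNING: / at ${usage}% (threshold ${THRESHOLD}%)" >> "$LOGFILE"
        fi
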
    Clearance Requirements
    • Active DoD Top Secret with SCI eligibility at time of hiring.
    Scheduled Weekly Hours:

    40
    Posting ID: 556644398
    Posted: 2020-05-21