Big Data DevOps Engineer

Primary Location: Bangalore, Karnataka
Date posted: 01/04/2019
Job ID: JR0011903

Description:

As a part of the Database Operations team, you will be able to demonstrate and expand your in-depth knowledge of Big Data while working with several engineering and business intelligence teams. The Database Operations team is responsible for all aspects of deployment and support for the Hadoop, RDBMS and NoSQL platforms across both on-premises and public cloud deployments, including but not limited to security and roles, storage, tool support, upgrades, performance tuning, standardization, documentation, capacity planning, scaling and Level-2 on-call support.

Working on the Database Operations team will give you the unique opportunity to join a team that is building new development, test and production environments from the ground up. This will open opportunities for you to provide input on how the environment is designed, built and standardized, and you will also be encouraged to contribute to the support structure. This team works closely with the data science and business intelligence groups in the department as well as groups outside of Business Intelligence.

Desired Skills and Experience:

  • A university-level degree (B.Sc./M.Sc.) is preferred.
  • Minimum of seven years’ recent, demonstrable experience in database and big data management with Cassandra and Hadoop.
  • Minimum of five years’ experience working with public cloud providers such as AWS, GCP or Azure.
  • Minimum of three years’ experience with microservices architecture and container platforms like Docker and Kubernetes.
  • Expert knowledge of designing, deploying and maintaining Cassandra and other NoSQL engines.
  • In-depth knowledge of Hadoop, MapReduce, Hive, Pig, YARN, HBase, and NoSQL and in-memory datastores (Redis, Couchbase, Cassandra, HBase and DynamoDB).
  • Experience with Microsoft SQL Server and MySQL is a plus.
  • Experience in business intelligence is a plus.
  • Should be pragmatic and show initiative.

Essential Duties/Responsibilities:

  • Develop and contribute to the design of data ingestion, OLAP and translation jobs for Hadoop, RDBMS and NoSQL platforms.
  • Use Big Data methodologies, solutions and tools to help the organization optimize business performance by managing, sorting and filtering large volumes of data and extracting meaningful value from them.
  • Deploy and manage Cassandra and Hadoop in various hybrid environments (on-premises/public cloud).
  • Contribute to and follow all standards within the Big Data environments.
  • Produce recoverable and well-documented services for the Big Data environments.
  • Have a strong command of all Big Data components, including but not limited to:
    • Hadoop, HDFS, HBase, Kafka, Flume, Hive, Impala, Hue, MapReduce, YARN, Oozie, Zookeeper, Pig, Spark and Cassandra.
  • Interact with BI data scientists to understand how data needs to be converted, loaded, compressed and presented.
  • Continuously monitor for and implement performance tuning strategies.
  • Review, test and deploy latest versions of the Apache/Cloudera/Hortonworks Hadoop platforms and Apache/DataStax Cassandra.
  • Understand and follow data cleansing and data integrity expectations.
  • Maintain a strong understanding of Kerberos, Windows AD and security practices for Big Data technologies.
  • Work closely with Unix admins to properly tune big data systems for bare-metal and virtual environments and define a standard burn recommendation for infrastructure requirements.
  • Understand the strategic direction set by senior management as it relates to team goals.
  • Use considerable judgment to determine solutions and seek guidance on complex problems.
  • Participate in Level-2 on-call support, up to 24/7.

Qualifications:

  • B.A./B.S. in Computer Science or an equivalent Management Information Systems degree preferred; a Master’s is a plus.
  • Strong shell and Python scripting skills.
  • 7+ years of experience in Big Data operations, with significant knowledge of the Hadoop and NoSQL platforms (preferably Cloudera and DataStax platforms and comparable open-source tool suites).
  • 5+ years of experience with public, private and hybrid cloud technologies, microservice architecture and container-based implementations.
  • 7+ years working with UNIX / Linux operating systems.
  • 7+ years’ experience managing, designing and working with the Hadoop ecosystem and NoSQL databases such as Cassandra and HBase.
  • 7+ years’ experience managing ecosystem tools such as Hive, Spark, Hue, Pig, MapReduce, YARN, Oozie and Zookeeper.
  • Experience with relational databases such as Microsoft SQL Server and MySQL.
  • IT operations work experience with a DevOps mindset is a plus.
  • Working knowledge of Data Warehouse practices.
  • Experience with data warehouse lifecycles and methodologies.
  • Good verbal/written communication skills.
  • Proficient with MS Office Professional Suite (Word, Excel and PowerPoint).
  • Ability to work independently and as a team player to produce effective results in a fast-paced dynamic environment.
  • Positive attitude, self-motivated and confident.
  • Demonstrable customer-focused attitude.
  • Ability to learn quickly.

Shift:

Shift 1 (India)

Primary Location:

India, Bangalore

Posting Statement:

McAfee prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
