Senior Software Engineer – Big Data Platform Job Description

Our company is looking for a Senior Software Engineer – Big Data Platform to join our team.

Responsibilities:

  • Research, develop, and deliver the Data Engineering team’s reliability model and framework;
  • Provide project delivery support, monitoring, and reporting as required;
  • Interact with project owners, data engineering team leads, and related internal customers to collect and document technical requirements;
  • Support the development and maintenance of data engineering guidelines, policies, standards, and process narratives for in-scope business functions;
  • Design and implement data infrastructure platforms, including data ingestion, data consumption, and stream processing on Google Cloud;
  • Contribute to the design, development, and delivery of the team’s Data Enablement Program roadmap;
  • Oversee the collection and analysis of key metrics and reporting dashboards to monitor enterprise data platform performance and reliability;
  • Oversee at least one component of the enterprise data ingestion pipeline / data API within the organization’s Big Data platform;
  • Take responsibility for the quality of development and delivery, and mentor and train team members as needed.

Requirements:

  • Solid understanding of and experience with relational databases such as MySQL, PostgreSQL, or SQL Server;
  • Hands-on experience in data engineering with a track record of manipulating, processing, and extracting value from large datasets in production;
  • 3+ years of programming experience in core Java or C#, with a solid understanding of the Software Development Life Cycle;
  • Experience with Apache Kafka, Apache Airflow, Google Cloud Platform, and Google BigQuery;
  • Experience with CI/CD and Behavior Driven Development is nice to have;
  • Bachelor’s degree in Computer Science, Software Engineering, Computer Engineering, or other related degree;
  • Experience implementing and using stream processing platforms such as Spark Streaming, Flink, or Storm;
  • Deep understanding of large-scale distributed systems, design patterns, and object-oriented design principles, with strong component design and modeling skills;
  • Experience with Linux, Docker, and Kubernetes is nice to have;
  • 3+ years of production experience with large-scale distributed systems such as Hadoop and Kafka.