DataSpark was created from a vision to transform Singtel’s rich and unique repository of data into business value and social impact. Our data products and services provide powerful insights and advanced analytics capabilities to businesses, government agencies, and other telecommunication companies. We strive for our analytics to be trustworthy and relevant to our clients while adhering to high standards of data privacy.
We are looking for a Senior DevOps Engineer to own release engineering and our production and development environments. Working with the product development and consulting delivery teams, you will play a critical role in our mission of building and delivering robust data platforms that incorporate data science and machine learning algorithms and models. This is a great opportunity to grow your software release skills and apply your infrastructure management expertise.
At DataSpark, you get to work with rich and diverse datasets and cutting-edge technology, and you get to see the impact of your results in real business and government decisions that deliver social benefit to consumers at scale. As a startup within Singtel, DataSpark offers an enviable work environment that combines a trailblazing spirit with industrial discipline. Working alongside creative, energetic, and passionate teammates from around the world, you will be part of our exciting growth journey as we build the company to the next level.
- Define, scope, size, implement, test, and deploy new and existing infrastructure, for both clients and internal teams, that processes hundreds of terabytes each day and continues to grow
- Develop, support, and improve tools for continuous integration, automated testing, and release management
- Install, configure and customize DataSpark software according to project requirements, including data ingestion, algorithms, APIs, UIs and security
- Design, implement, operate, and troubleshoot the automation and monitoring of our infrastructure across multiple environments and multiple data centers, whether owned or rented from cloud providers
- Serve as the subject matter expert on infrastructure performance, both within the company and for our clients
- Perform system integration tests, performance tests, technical acceptance tests, and user acceptance tests to ensure proper functioning of deployed systems
- Troubleshoot and resolve issues in multiple environments
- Improve our infrastructure capabilities, optimizing for cost, simplicity, and maintainability
- Experience building and running a mission-critical service at scale
- Experience in software engineering, release engineering and/or configuration management
- Experience as a systems administrator in a Linux environment
- Experience administering open source big data systems and frameworks such as Hadoop, Spark, and Presto
- Experience in full stack Cloudera Hadoop administration
- Professional experience operating the AWS cloud (EMR, EC2, VPC, VPN, EBS, S3, Route 53, IAM, the AWS CLI, etc.)
- Experience building and deploying CI/CD platforms such as Jenkins, Artifactory, GitHub, and Bamboo, with the ability to support applications both on-premises and in the AWS cloud
- Deep technical expertise in DevOps automation tools and scripting, e.g. Python or Ruby
- Strong experience with open source platforms, particularly Kubernetes and containers
- Experience with configuration management and deployment tools such as Puppet, Ansible, Chef, or Terraform
- Experience with logging, monitoring, and tracing, e.g. CloudWatch, Elasticsearch/Kibana (ELK), Prometheus/Grafana, New Relic, Datadog, or Dynatrace
- Demonstrated experience with software product life cycles, whether traditional enterprise software development or agile internet data product development
- Working knowledge of network security and of web and network protocols and standards
- Knowledge of information security issues is a plus
- Good knowledge of monitoring systems