You will be part of a company that is investing in its technology and data platform to develop new revenue-generating products, leveraging open source and big data technologies. These include new data integration, advanced analytics, visualisation, aggregation and smart data initiatives that address new customer needs and are highly visible and strategic within the organisation.
These initiatives use best-of-breed technologies such as Hadoop, Spark, HDFS, Kafka, Solr, Cassandra and AWS, along with in-house developed technologies. The successful candidate will work in a fast-paced, dynamic team environment, building commercial products that are at the heart of the business.
Duties and accountabilities:
- Design and implement “Big Data” infrastructure for batch and real-time analytics.
- Ensure highly interactive response times and prevent performance bottlenecks from creeping into the system.
- Translate business use cases and feature requests into technical designs and development tasks.
- Be an active player in system architecture and design discussions.
- Take ownership of development tasks, participate in regular design and code review meetings, and take pride in the high quality of your own work.
- Be delivery-focused, have a passion for technology, and enjoy offering new ideas and approaches.
Skills and experience:
- Demonstrable commercial experience on big data/advanced analytics projects.
- Knowledge of algorithms, data structures, and computational complexity.
- Spark, Hadoop, HDFS
- AWS (EC2, EMR, S3) or other cloud offerings.
- DevOps tools: Kubernetes, Docker, Jenkins.
- Apache Zeppelin