You will assess the data tools and technologies currently in use and increase the data capability of the business by introducing newer, more innovative tools and technologies.
You’ll bring in-depth knowledge of big data best practices and a desire to work in a DevOps environment, where you will have end-to-end ownership for developing, deploying and supporting your data assets.
- Working within the Data Platform team of the business's Investment Banking arm
- The existing tech stack includes Java, Python, Hadoop, Spark, Hive, Presto and AWS - however, there are no limitations on technology choice, as the goal is to act as a data and innovation evangelist across the business.
To be successful in this role you will have some of the following skills and experience:
- AWS or equivalent cloud environments - any or all of EC2, S3, RDS, DynamoDB, EMR, Redshift, Glue, Athena, Apache Parquet
- Distributed computing frameworks such as Hadoop and Spark, distributed SQL, and/or NoSQL query engines
- Experience in Python or Java
- Machine learning and self-serve analytics principles