ABN AMRO – Data Scientist (Python/PySpark) (Data Engineering) 604412

Client (MP) ATOS Nederland BV
Department South Unit 07
Work location Amsterdam
Segment ICT | (Atos) BI, Big Data, GIS & ERP
Role Big Data Engineer
Job title ABN AMRO – Data Scientist (Python/PySpark) (Data Engineering) 604412
Number of positions 1
Hours per week 40.00
Assignment start date 19-7-2021
Assignment end date 31-12-2021
Option to extend Yes
Submit CVs before 12-7-2021 10:00
Job description Rate: €80 – max €85

Your work environment:
Financial crime is a hot topic. The media seem to report on money laundering or terrorist financing almost every day. The social relevance of our work is just as important to us. We are the gatekeepers of the financial system and are responsible for keeping it safe and healthy, which is of fundamental importance for the stability and security of our clients and, by extension, for society as a whole. All of this affects our relevance and continuity as an organisation. We accomplish this by aligning the interests of our clients, the bank and society at large. Detecting Financial Crime (DFC) is a relatively new department that is still in the development stage. The fact that the department is in transition will give extra depth to your role. In this dynamic environment, you'll have the opportunity to help the department reach the next stage and initiate improvements.

The Data Science Innovation (DSI) team is responsible for data science innovations that create a future-proof DFC and bank in general. Our team's current main focus is transaction monitoring, but our scope is the broader DFC domain. The DSI team aims to transform the traditional transaction monitoring landscape into a 2.0 version by building state-of-the-art machine learning technologies (e.g. anomaly detection, network analytics, NLP) that effectively detect financial crime (money laundering, terrorist financing, human trafficking, etc.) using machine learning and advanced network analytics.

Your work:
Convert 7 existing data pipelines, currently written in Python/PySpark, into a new PySpark-based feature framework, including the corresponding parallel tests and unit tests.
The conversion first requires a breakdown of the current pipeline into smaller logical steps. Key in this breakdown is ensuring optimal re-use of intermediate steps across the various features. The individual steps are written in Python as methods with a predefined method signature. Additionally, the new pipeline uses a different input source; as a result, the new input source needs an additional pre-processing step and parallel tests.
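The decomposition described above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern (steps with a predefined signature, a shared pre-processing step re-used by downstream features): plain Python dicts stand in for PySpark DataFrames, and all names (`preprocess`, `feature_total`, `run_pipeline`) are illustrative assumptions, not part of the actual framework.

```python
from typing import Callable, Dict, List

# A "frame" here is a plain dict of column-name -> values, standing in
# for a PySpark DataFrame in this sketch.
Frame = Dict[str, List]

# Hypothetical predefined step signature: every pipeline step takes a
# frame and returns a new frame, so steps compose freely.
Step = Callable[[Frame], Frame]

def preprocess(df: Frame) -> Frame:
    """Shared pre-processing step, re-usable by multiple features."""
    out = dict(df)
    out["amount"] = [abs(a) for a in df["amount"]]
    return out

def feature_total(df: Frame) -> Frame:
    """Example feature step: derive a column from the pre-processed input."""
    out = dict(df)
    total = sum(df["amount"])
    out["total_amount"] = [total] * len(df["amount"])
    return out

def run_pipeline(df: Frame, steps: List[Step]) -> Frame:
    """Apply the steps in order; intermediate frames could be cached so
    several features share the same pre-processed input."""
    for step in steps:
        df = step(df)
    return df

result = run_pipeline({"amount": [-10, 20]}, [preprocess, feature_total])
# result["amount"] == [10, 20]; result["total_amount"] == [30, 30]
```

Because every step has the same signature, each one can be unit-tested in isolation and its output compared against the legacy pipeline in a parallel test.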

Your profile:
Good working knowledge of Python; preferably at least 3 years of Python experience and 5 years of programming experience in total.
Preferably experience with PySpark and/or big data.
Preferably experience with data engineering.
Can work independently.

