Job Description
In this role, you will work on building and further developing large-scale data services and data pipelines in Scala. You will develop algorithms to match, merge, and identify anomalies in large datasets, while improving the systems’ simplicity, scalability, and efficiency to handle ever-growing volumes. An important part of the work is to shorten data throughput time and ensure that the information is always as up to date as possible. You will also identify opportunities for improvement, drive proposals for automation, and define requirements and design solutions for new functionality.
What kind of person are we looking for?
A creative problem-solver who quickly identifies and suggests solutions.
Someone who thrives in a technically advanced team and values the exchange of ideas and perspectives.
A person with a solid development background who also has a natural drive to keep growing and learning.
A team player who is happy to share their experience and help colleagues develop.
Qualifications:
Able to wear multiple hats, with a "do what it takes" ability and attitude
Strong programming and design skills
Deep knowledge of and extensive experience with the JVM, preferably Scala
Excellent analytical and problem-solving skills
Excellent oral and written communication skills in English
Experience with one or more application frameworks such as Spark, Akka, or Play
Experience with big data ETL and data streaming
Either permanent residency in Sweden or a commitment to relocate, preferably to the Malmö area, as the work is on-site in Malmö
Additional merit qualifications:
Event streaming, e.g. Kafka
Ability to design and implement APIs and REST services
NoSQL databases
Experience with or knowledge of how to apply machine learning or AI to large amounts of data
Information
Language requirements: Fluent English (mandatory), Swedish
Scope: Full-time
Location: Malmö
Start: Immediate
Duration: 6 months, with possible extension
Applying
If you're interested, please use the form below to apply, or contact Felix Arvberger on LinkedIn.