Welcome to Hapag-Lloyd, a leading global logistics company. As the fifth largest container liner shipping company in the world, we are here to make sure that the flow of goods never stops. We are an international team of 12,800 employees working across 400 offices in 128 countries.
The Knowledge Center, located in Gdańsk, will function as a hub for innovation and develop state-of-the-art business and technology solutions to help us navigate the future. And we want to do that together with you.
Our Mission - Your Chance
We are on a mission to build a world-class AI team capable of helping a world-class shipping company like Hapag-Lloyd stay best in class with intelligent, customer-centric services.
Are you passionate about Big Data, AI and Machine Learning? Then come on board, because we have plenty of real business cases and data waiting to be brought to life.
For our location in Gdańsk we are looking for a
Machine Learning Engineer - Knowledge Center
Responsibilities and Tasks:
- Design data- and model-driven solutions for tasks that currently need human involvement or are too complex for standard engineering approaches.
- You orchestrate AI technologies, analysis methods and statistics to uncover the missing links in current processes and solutions, and to unlock innovative solutions that can evolve from a simple prototype into full-blown intelligent enterprise products.
- You analyze large amounts of structured and unstructured data, augment them and identify patterns, applying state-of-the-art data-mining methodologies, tools and libraries.
- You eagerly evaluate and test new papers, libraries and third-party solutions, and actively participate in AI communities such as meetups, conferences, hackathons or Kaggle competitions, always looking for emerging opportunities with high innovation potential for Hapag-Lloyd.
AI PRODUCT DEVELOPMENT
- You are a key player when new, business-case-specific AI modules are developed. You:
- analyze and understand the business case, the processes and the available data structures
- consult the product teams on the required training data
- support the data engineers in engineering the data cleaning
- lead the feature engineering process
- support the data engineers in building the ETL pipeline
- develop the model experiments and provide the production-ready model
- operationalize the model
- participate in implementing the learning loop and the automatic deployment evaluation of retrained models
- participate in the design and implementation of the model performance monitoring
- You support product teams in handling AI-module-related incidents in a fast, solution-oriented manner.
AI PLATFORM DEVELOPMENT
- You are a key player in building the generic, standardized and highly reusable Hapag-Lloyd data science development and analysis stack, composed of tools, services and modules, which enables the AI team as well as business and system analysts to continuously improve the time to market and cost efficiency of AI solutions.
- You organize trainings and information sessions for IT and business departments as well as public community events on various AI topics, spreading knowledge and awareness of the possibilities, limits and future of AI in logistics and IT.
Requirements and Qualifications:
A bachelor’s or master’s degree in computer science, business administration, mathematics, physics or another scientific field is preferred but not required. Much more important are your experience and your attitude.
Two to three years of relevant experience in enterprise-level IT that has equipped you to communicate effectively with diverse stakeholders at corporate level is a good starting point.
- You’ll need 1 – 2 years of hands-on development experience, preferably with backend involvement. Experience with batch, web-service or test-driven development is a plus.
- SQL, Python and the related development stack, including Git, common IDEs and ML frameworks, are important assets for your endeavor. Java, JavaScript or C++ would be helpful but are optional.
- Hands-on experience with at least one relational DBMS such as DB2, SQLite or PostgreSQL is important, while NoSQL or distributed databases are initially optional.
- You should be confident working with various data formats, from tabular data like CSV to markup formats like HTML and common transfer formats like XML and JSON. Hands-on experience with at least some MS Office, image, audio or video formats is also important.
- Cloud experience, e.g. with AWS or Azure, and related concepts and protocols of distributed computing would be helpful but is not a must.
DATA SCIENCE EXPERIENCE
- You should be confident applying linear algebra (vector and matrix operations), statistics and multivariate calculus (integration, derivatives, gradients, optimization), and should be able to come up with numerically stable algorithms.
- You’ll need relevant hands-on experience in building ETL pipelines.
- You should have comprehensive experience in loading files and extracting data from databases. Being able to consume data streams or use REST APIs would be interesting but is initially optional.
- For ETL you’ll need to be an expert in data cleaning, evaluating basic data statistics, discretization, imputation, encoding categorical data and other transformation methods.
- You can confidently analyze data to confirm that model assumptions are met, for various model types.
- You should have expert knowledge of every stage of an ML pipeline: data preparation, adequate visualization, model types, model testing and evaluation, as well as unsupervised learning methods.
- You should bring expert know-how in at least two of the following disciplines: NLP, CNNs, RNNs, GANs … and related ML frameworks like scikit-learn, Keras, PyTorch or TensorFlow. More is better.
- Being able to present and explain data with proper diagrams, as well as experience with current data science platforms like Anaconda, Dataiku, RapidMiner …, is a must.
- You’ll need to be able to understand, explain and discuss complex topics in fluent English. German and any other language are a plus in the context of multilingual models.
- You enjoy sharing your expert knowledge with others, thereby generating new knowledge.
- As a senior data scientist, you feel responsible for supporting and enabling your team colleagues on a professional and personal level, ensuring a relaxed, collaborative atmosphere and continuously improving team performance.
- At least one year of experience working in a Scrum team is a must. Classical project management topics like task breakdown, requirements engineering, make-or-buy analysis and Gantt charts will also be necessary.
- Your good analytical grasp of complex interrelationships and confident handling of the pre-processing and evaluation of large amounts of data will help you tackle exciting questions.
- You want to solve challenging problems, show high commitment and want to make a difference.
- Furthermore, you can present results to your team and our business stakeholders in a simple and understandable way. It makes no difference to you whether this is in Polish, English or German.
- You have strong troubleshooting and problem-solving skills.
- You thrive in a fast-paced, innovative environment.
- You enjoy working in an international team and are passionate about new technologies and software.
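To give a flavor of the "numerically stable algorithms" mentioned in the requirements above, here is a minimal sketch of the classic log-sum-exp trick in Python. The function name and inputs are illustrative only, not part of the role description:

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs)).

    Subtracting the maximum before exponentiating keeps every exp()
    argument <= 0, so nothing overflows even for very large inputs,
    while the result is mathematically unchanged.
    """
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# A naive sum(exp(x)) would overflow for x around 1000;
# the stable form returns 1000 + log(2) without trouble.
print(logsumexp([1000.0, 1000.0]))
```

The same idea underlies stable softmax and log-likelihood computations in the ML frameworks named in this posting.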
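The ETL transformation methods listed in the requirements (imputation, discretization, categorical encoding) can be sketched in a few lines of plain Python. In practice these would come from a library such as scikit-learn or pandas; all function names and data below are illustrative:

```python
def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def discretize(values, bins):
    """Map each value to the index of the bin it falls into,
    given a sorted list of bin edges."""
    return [sum(1 for edge in bins if v >= edge) for v in values]

def one_hot(categories):
    """Encode categorical labels as 0/1 indicator vectors."""
    levels = sorted(set(categories))
    return [[1 if c == level else 0 for level in levels] for c in categories]

print(impute_mean([1.0, None, 3.0]))           # [1.0, 2.0, 3.0]
print(discretize([5, 15, 25], bins=[10, 20]))  # [0, 1, 2]
print(one_hot(["a", "b", "a"]))                # [[1, 0], [0, 1], [1, 0]]
```

Each toy function mirrors a standard preprocessing step (e.g. scikit-learn's imputers, discretizers and one-hot encoders) that this role would apply at production scale.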
Hapag-Lloyd Aktiengesellschaft (Spółka akcyjna) Oddział w Polsce
Talent Acquisition & Employer Branding • Mrs Zuzanna Przewiezlikowska
Al. Grunwaldzka 413 • 80-309 Gdańsk