This role offers the opportunity to be part of a digital transformation journey, working in a high-performing, self-managed team that encourages collaboration, empathy, and a one-team culture. The successful candidate will have the chance to work with cutting-edge technologies, stay up-to-date with the latest advancements in AI and machine learning, and play a crucial role in designing and deploying AI systems.
What you'll do:
As an Enterprise Data Architect focusing on Generative AI, you will be at the forefront of designing robust AI systems. You will manage all aspects of data handling for these systems, from collection to preprocessing, ensuring the data is of the quality needed for model training. You will also develop generative models such as GANs and LLMs, deploy them into production environments, and uphold ethical considerations such as bias mitigation throughout. You will continuously monitor model performance for improvements and stay current with the latest advancements in AI. Collaborating with stakeholders to deliver projects successfully and training junior team members are also part of the role.
- Designing the overall architecture of AI systems, ensuring they are scalable, efficient, and meet the specific requirements of the project.
- Ensuring the availability and quality of data required for training AI models. This includes data collection, preprocessing, augmentation, and management.
- Training generative models on large datasets, using appropriate algorithms and techniques to ensure high-quality output.
- Developing and fine-tuning generative models such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and Large Language Models (LLMs).
- Deploying AI models into production environments and integrating them with existing systems and applications.
- Continuously monitoring the performance of AI models in production, identifying issues, and making necessary adjustments to improve accuracy and efficiency.
- Ensuring that AI models are developed and deployed in an ethical manner, considering issues such as bias, fairness, and compliance with regulations.
- Staying up-to-date with the latest advancements in AI and machine learning, incorporating new techniques and technologies into projects.
- Working closely with data scientists, engineers, product managers, and other stakeholders to ensure the successful implementation of AI projects.
- Training and mentoring junior members of the team.
What you bring:
As an Enterprise Data Architect specialising in Generative AI, you bring extensive experience in designing, training, and deploying machine learning and deep learning models. Your expertise extends to Natural Language Processing techniques for tasks such as text generation, translation, and sentiment analysis, and you have a proven track record of deploying AI models into production environments and monitoring their performance. You understand the ethical considerations involved in developing AI models, and you are familiar with AI tools and frameworks such as TensorFlow and PyTorch as well as generative models like GANs and LLMs. You have in-depth knowledge of both relational and non-relational databases, can create complex data models that integrate varied data sources, are proficient in writing complex SQL queries, understand NoSQL databases, and have experience with ETL tools for data extraction, transformation, and loading.
- Proficiency in designing, training, and deploying machine learning and deep learning models.
- Expertise in NLP techniques for tasks like text generation, translation, and sentiment analysis.
- Experience in deploying AI models into production environments and monitoring their performance.
- Understanding of ethical considerations and techniques for mitigating bias in AI models.
- Familiarity with AI tools and frameworks such as TensorFlow, PyTorch, and LangChain for developing and deploying AI solutions.
- Knowledge of generative models like GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and LLMs (Large Language Models).
- In-depth knowledge of relational (e.g., MySQL, PostgreSQL) and non-relational (e.g., MongoDB, Cassandra) databases.
- Ability to create complex data models that integrate various data sources and support data analytics.
- Proficiency in writing complex SQL queries and a solid understanding of NoSQL databases.
- Experience with ETL tools to handle data extraction, transformation, and loading processes.
- You must be a New Zealand Permanent Resident or Citizen to be considered for this position.
Robert Walters endeavours to review all applications within a maximum of five working days. If you have not received correspondence within this timeframe, please do not hesitate to contact Sophie Holubicka on +64 9 374 7300.