ChatGPT and other AI tools based on Large Language Models (LLMs) have grabbed the headlines for their ability to write poems, short stories and other kinds of content – including code. Their intuitive, interactive interfaces make them easy to use, and they can be real time-savers.
However, the code LLMs write is also often wrong – in ways that can be hard to spot – so it requires constant human supervision. But LLMs are not the only way to apply AI to code: fully autonomous code-writing is possible by using reinforcement learning to produce code that is guaranteed to compile, run and be correct.
In this webinar, run in partnership with InfoQ, we looked at some of the pros and cons of LLM-based coding tools and a specific example – writing Java unit tests – of where reinforcement learning is a more effective approach.
Watch now to:
- Learn why the tech behind LLMs is good for some tasks, but not others, and how reinforcement learning (RL) differs
- See how RL can write Java unit tests completely autonomously
- Listen to the Q&A where we respond to attendee questions on these topics
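To make the Java unit-testing example concrete, here is a sketch of the *style* of test such an autonomous tool produces: exercise a method with a concrete input, observe the result, and assert it exactly. The `PriceCalculator` class and its `applyDiscount` method are hypothetical stand-ins for a project's own code, and the test is written as a plain `main` method rather than with a test framework so it is self-contained; real generated tests would typically use JUnit.

```java
// Hypothetical class under test (in a real project this would already exist).
class PriceCalculator {
    double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price - price * percent / 100.0;
    }
}

// The kind of test an autonomous tool might generate: arrange an input,
// act, and assert the exact observed output.
public class PriceCalculatorTest {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        double result = calc.applyDiscount(200.0, 25.0);
        if (result != 150.0) {
            throw new AssertionError("expected 150.0, got " + result);
        }
        System.out.println("test passed");
    }
}
```

Because each generated test is compiled and executed before it is accepted, the assertions reflect the code's actual behavior rather than a model's guess at it.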