Building on the previous episode, we look at refactoring our background job into a more maintainable object and at providing conversation context to the LLM so that we can chain responses together for a more conversational experience.
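
The idea is sketched below in Python as a minimal, hypothetical example (the episode's actual stack and names may differ): the background job delegates to a plain object that owns the LLM call and keeps the accumulated message history, so each new prompt is sent along with the prior turns and the model can answer in context. It assumes an OpenAI-style chat completion client (openai>=1.0) and an API key in the environment.

```python
# Hypothetical sketch, not the episode's code: a plain object that owns the LLM
# call and keeps the running message history so each new prompt is answered with
# the full conversation as context. Model name and class/method names are
# illustrative; assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI


class ConversationResponder:
    """Wraps the chat call so the background job stays a thin delegator."""

    def __init__(self, model: str = "gpt-4o-mini") -> None:
        self.client = OpenAI()
        self.model = model
        # System prompt plus every prior user/assistant turn; resending this
        # history on each request is what chains the responses together.
        self.messages: list[dict[str, str]] = [
            {"role": "system", "content": "You are a helpful assistant."}
        ]

    def respond(self, prompt: str) -> str:
        self.messages.append({"role": "user", "content": prompt})
        completion = self.client.chat.completions.create(
            model=self.model,
            messages=self.messages,
        )
        reply = completion.choices[0].message.content
        # Store the assistant turn so the next call sees it as context.
        self.messages.append({"role": "assistant", "content": reply})
        return reply


if __name__ == "__main__":
    # A background job would simply build (or reload) the responder and delegate.
    conversation = ConversationResponder()
    print(conversation.respond("Summarize what a background job is."))
    print(conversation.respond("Now explain it again in one sentence."))
```

Keeping the history inside one small object means the job itself stays trivial, and persisting those messages (for example on the conversation record) is what lets follow-up prompts feel conversational rather than stateless.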