Agents Aren't Always the Answer: The Case for AI Workflows
Achieve high ROI and dependable outcomes by embedding AI innovation within structured workflows.
Confidently select reliable, cost-effective AI models for critical applications using a clear framework: research, shortlist, evaluate.
AI coding: helper or full agent? Both are useful, but mixing them directly is risky. Keep the workflows separate (at the project or module level). The future leans towards agent command.
Code examples for Building Effective Agents ported and adapted to use Pydantic AI.
Independent LLM calls are parallelisable.
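A minimal sketch of the idea: when LLM calls share no state, they can run concurrently with `asyncio.gather`, so total wall time approaches the slowest single call rather than the sum. The `fake_llm_call` function below is a hypothetical stand-in for a real async SDK call.

```python
import asyncio

# Hypothetical stand-in for a real LLM client call; swap in your SDK's
# async method (e.g. an async chat-completion request).
async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"answer to: {prompt}"

async def run_parallel(prompts: list[str]) -> list[str]:
    # Independent calls share no state, so gather() runs them concurrently.
    # Results come back in the same order as the prompts.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

results = asyncio.run(run_parallel(["a", "b", "c"]))
```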
On the mundane advantages of easy artificial intelligence.
Automagically create flashcards from a piece of text with AI.
Comparing RAG over a rich, complex text using GraphRAG vs a traditional embeddings index.
Using Pydantic and Instructor with OpenAI GPT-4o to treat the LLM as a software component for implementing different tasks.
A very brief introduction to language models and prompting.
Using an LLM to summarise and cluster articles (works better than embeddings and traditional ML clustering).
For most users of AI, and for developers building with AI, "open-source" large language models are not very interesting.
Take a ToDoList class and chain it to GPT-3.5 by passing the methods for interacting with the list as LangChain tools or OpenAI function calls.
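A sketch of the pattern, under stated assumptions: a small `ToDoList` class (hypothetical names) whose bound methods are converted into OpenAI-style function-call schemas via `inspect`. A real implementation would map Python type hints to JSON Schema types; here every argument is assumed to be a string for brevity.

```python
import inspect

class ToDoList:
    """Minimal to-do list whose public methods become LLM tools."""
    def __init__(self):
        self.items: list[str] = []

    def add_item(self, item: str) -> str:
        """Add an item to the list."""
        self.items.append(item)
        return f"added {item!r}"

    def remove_item(self, item: str) -> str:
        """Remove an item from the list."""
        self.items.remove(item)
        return f"removed {item!r}"

def method_to_tool_schema(method) -> dict:
    # Build an OpenAI-style function schema from a bound method's
    # signature and docstring (all args assumed string-typed here).
    sig = inspect.signature(method)
    props = {name: {"type": "string"} for name in sig.parameters}
    return {
        "name": method.__name__,
        "description": inspect.getdoc(method) or "",
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

todo = ToDoList()
tools = [method_to_tool_schema(m) for m in (todo.add_item, todo.remove_item)]
```

The resulting `tools` list can be passed as the function/tool definitions of a chat-completion request; when the model returns a function call, dispatch it by name back onto the same bound methods.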
Multiplex between multiple OpenAI clients and Azure OpenAI deployments, and track and attribute the cost of requests per user and/or endpoint.
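A minimal sketch of the multiplexing-plus-accounting idea, assuming nothing about the real post's implementation: requests rotate round-robin over a pool of clients, and an estimated cost is accumulated per user. The dummy lambda clients, the crude word-count token estimate, and the price constant are all illustrative assumptions.

```python
import itertools
from collections import defaultdict

class ClientMultiplexer:
    """Round-robin over several LLM clients and attribute request
    cost per user. Pricing and token counting are illustrative."""

    def __init__(self, clients):
        self._cycle = itertools.cycle(clients)
        self.cost_by_user = defaultdict(float)

    def complete(self, user: str, prompt: str,
                 price_per_1k_tokens: float = 0.002) -> str:
        client = next(self._cycle)      # pick the next client in rotation
        tokens = len(prompt.split())    # crude token estimate for the sketch
        self.cost_by_user[user] += tokens / 1000 * price_per_1k_tokens
        return client(prompt)           # delegate to the chosen client

# Dummy clients standing in for OpenAI / Azure OpenAI deployments.
mux = ClientMultiplexer([lambda p: f"openai: {p}",
                         lambda p: f"azure: {p}"])
```

In a real system the clients would be configured SDK instances and the token counts would come from the API response's usage field rather than a word count.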
Everything I learned (so far) about grounding LLMs with retrieval-augmented generation to generate output that is accurate and relevant.
We know that we can ask GPT questions. But how about it asking us? In this example, GPT-3.5-turbo works for us as a waiter in a restaurant.