At Trellis, we know that behind each file lies someone’s life, story and trust. We treat that trust as a privilege, one that guides every decision we make about artificial intelligence and information handling.
1. Interns, Not Gods
When designing AI systems with clients, we start by asking, "What would you do with a dozen smart interns?"
A good task for an intern must be clearly defined (you know exactly what you are asking the intern to do) and easily evaluated (you can quickly determine whether the intern has done a satisfactory job). These principles apply equally well to AI.
Without caution, it's surprisingly easy to slip into treating AI as a kind of omniscient entity, expecting it to make subtle distinctions that even humans find difficult. Keeping the "interns, not gods" framing front of mind helps avoid this trap.
2. Design for User Responsibility
If AI is the intern, then the user is their manager.
Increasingly, organisations are adopting AI policies that emphasise user accountability: "You can use AI, but you remain responsible for the output." This principle is sound.
When designing AI-driven workflows, we strive to reinforce the user's role as the final decision-maker.
3. User-Centred Design
User-centred design has been central to effective product design since the 1980s.
With AI systems, user-centred design remains crucial.
- User-centred design reminds us that AI-based systems are fundamentally software, demanding adherence to software best practices.
- Evaluating AI is challenging because AI is non-deterministic: the same system can be effective in some contexts and fail in others. Involving users systematically throughout the development process surfaces impacts that internal testing alone would miss.
Our User Strategy Group was invaluable during the implementation of HelpFirst for Citizens Advice Scotland. With Trellis, we're establishing a Teacher Strategy Group to extend and enhance this successful user-centred approach.
4. Supporting, Not Replacing, Existing Workflows and Relationships
Tread lightly. AI should support and enhance current workflows and human relationships, not replace them.
5. Systematic Evaluation
It is easy to cherry-pick successful examples for demonstrations. Consistently reliable AI performance for real-world use is far more demanding.
Systematic evaluation is essential. "This feels better" is insufficient; we must establish reliable metrics of system performance.
This requires clearly defining what success looks like and systematically labelling data accordingly.
Our goal is to build AI-powered systems suitable for widespread public sector use, and in these scenarios, subjective feelings of improvement simply aren’t adequate.
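The evaluation approach described above can be sketched in a few lines: define what success looks like, label real cases against that definition, and compute a metric over all of them rather than over hand-picked demos. The cases and labels below are purely hypothetical, for illustration only.

```python
# Minimal sketch of systematic evaluation: score AI outputs against
# human-labelled examples instead of relying on "this feels better".
# Case IDs and pass/fail labels here are hypothetical placeholders.

labelled_examples = [
    # (case_id, human_label: was the AI output satisfactory?)
    ("case-001", True),
    ("case-002", True),
    ("case-003", False),
    ("case-004", True),
]

def success_rate(examples):
    """Fraction of labelled cases judged satisfactory by a human reviewer."""
    passed = sum(1 for _, ok in examples if ok)
    return passed / len(examples)

# A metric like this is tracked over the whole labelled set, over time,
# rather than demonstrated on cherry-picked successes.
print(f"Success rate: {success_rate(labelled_examples):.0%}")
```

In practice the labelled set would grow as users review real outputs, and the metric would be monitored across releases rather than computed once.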
6. Rock-Solid Tech
Data governance and technology architecture must be robust and dependable. Let’s never overlook these essential basics.
Create/Change and HelpFirst are building Trellis in collaboration with the Scottish Government's Learning Directorate, Aberdeenshire and Dumfries & Galloway.