My kids' first AI
My kids have been using AI for a while now. Mostly through me - “Dad, can you ask Claude…” followed by whatever they’re curious about. They weren’t learning to use AI. They were watching someone else use it.
That’s not how you build literacy. You build it by doing.
The setup
I wanted something I could control completely. Not a Claude or ChatGPT account I hand over and hope for the best. Not a kids’ app that dumbs everything down to the point where it’s useless. Something where I set the rules, pick the models, write the system prompts, and can review every conversation.
Open WebUI does exactly this. It’s an open-source chat interface that connects to any AI provider. Self-hosted, so the data stays on my hardware. Each user gets their own account. I control which models they can access, what the default system prompt says, and I can read every conversation after the fact.
I set it up on a home server running Docker behind Tailscale (so it’s accessible from any device on our network but not the public internet), with Caddy handling TLS. The whole thing took about an hour. The kids see a clean chat interface at a nice URL. I see an admin panel with full visibility.
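For anyone picturing it, the core is a short Compose file plus a couple of lines of Caddy config. A minimal sketch - service names, ports, and the hostname are illustrative, not my exact setup:

```yaml
# docker-compose.yml - minimal sketch
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    volumes:
      - open-webui:/app/backend/data
    ports:
      - "127.0.0.1:3000:8080"   # bound to localhost only; Caddy fronts it
    restart: unless-stopped

volumes:
  open-webui:
```

```
# Caddyfile - Caddy terminates TLS and proxies to the container
# (hostname is a placeholder; yours comes from your Tailscale/DNS setup)
chat.home.example {
    reverse_proxy 127.0.0.1:3000
}
```

Because the container only listens on localhost, the only way in is through Caddy, and the only way to Caddy is over the Tailscale network.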
For the AI backend, I’m using OpenRouter - a single API key that gives access to models from every major provider. The kids’ accounts default to Gemini 2.5 Flash, which is fast, capable, and costs fractions of a cent per conversation. I’ve hidden the expensive models from their accounts entirely.
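Open WebUI speaks the OpenAI-compatible API, so pointing it at OpenRouter is essentially two environment variables on the container - a sketch, with a placeholder key:

```yaml
# Added to the open-webui service in docker-compose.yml
environment:
  - OPENAI_API_BASE_URL=https://openrouter.ai/api/v1
  - OPENAI_API_KEY=sk-or-...   # an OpenRouter key, ideally with a credit limit set
```

From there, every model OpenRouter offers shows up in the admin panel, and per-user model visibility is handled in Open WebUI itself.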
The system prompt
This is where it gets interesting. Open WebUI lets you set a system prompt per user. The AI doesn’t just answer questions - it answers them the way you tell it to.
For my kids, the system prompt says something like: you’re a helpful learning companion for a child. Explain things clearly. If they ask something age-inappropriate, redirect gently. Encourage curiosity. Ask follow-up questions. Don’t just give answers - help them think.
That last part matters. The default behaviour of most AI tools is to hand you the answer on a plate. For adults using it as a productivity tool, that’s fine. For kids learning how to think, it’s the opposite of what you want. The system prompt shifts the dynamic from “answer machine” to something closer to a patient tutor.
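For concreteness, a prompt in that spirit - illustrative, not verbatim what mine says:

```
You are a friendly learning companion for a curious child.
Explain things clearly and simply, without talking down to them.
Don't just hand over answers: ask a follow-up question that helps
them reason their way to the answer themselves.
If a topic isn't age-appropriate, redirect gently to something that is.
Encourage curiosity. Celebrate good questions.
```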
Why not just give them an account?
Three reasons.
Control. The parental controls on consumer AI products are basic. You can’t customise the system prompt. You can’t review conversations after the fact (without logging into their account). You can’t restrict which models they access. You’re renting someone else’s interface and hoping their guardrails match your values.
Cost. A Claude Pro or ChatGPT Plus subscription is around $20 USD per month per user. My kids’ usage on OpenRouter costs cents per day. For casual exploration and homework help, the expensive models are overkill.
Literacy. I want my kids to understand that AI is a tool with knobs and dials, not a magic box. When they see that Dad set up the server, chose the model, wrote the prompt that shapes how it talks to them - they start to understand that someone is always making those choices. Even when it’s invisible. Especially when it’s invisible.
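To put a rough number on the cost point: a back-of-envelope calculation using assumed per-token prices for Gemini 2.5 Flash (the figures below are placeholders in USD per million tokens - check OpenRouter's current pricing):

```python
# Back-of-envelope cost of one conversation on a cheap model.
# Prices are assumptions, not quoted rates - check current pricing.
INPUT_PER_M = 0.30    # USD per million input tokens (assumed)
OUTPUT_PER_M = 2.50   # USD per million output tokens (assumed)

def chat_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of a single conversation."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A chatty homework session: ~5k tokens in, ~2k tokens out.
cost = chat_cost(5_000, 2_000)
print(f"${cost:.4f} per conversation")  # roughly two-thirds of a cent
```

Even at ten conversations a day, that's pennies - versus a fixed subscription per child per month.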
Image generation
Text chat is one thing. But the moment my kids could generate images from a description, the engagement went to another level.
Google’s Nano Banana (Gemini 2.5 Flash Image) is available through OpenRouter like any other model. Kids type a description, the AI generates an image, and they can iterate on it - “make the background blue,” “add Hello Kitty,” “make it look more like Roblox.” It’s the same create-evaluate-iterate loop as text, but visual, which clicks faster for younger kids.
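Under the hood this is just OpenRouter's OpenAI-style chat completions endpoint. A sketch of what a request looks like - the model slug and the `modalities` field are my assumptions from OpenRouter's docs, so verify both against their current catalogue; the code builds the request without sending it:

```python
import json
from urllib import request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
# Model slug is an assumption - check OpenRouter's catalogue for the current name.
IMAGE_MODEL = "google/gemini-2.5-flash-image"

def build_image_request(prompt: str, api_key: str) -> request.Request:
    """Build (but don't send) an OpenAI-style chat request asking for an image."""
    payload = {
        "model": IMAGE_MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "modalities": ["image", "text"],  # request image output alongside text
    }
    return request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_image_request("a cat in a spacesuit, cartoon style", "sk-or-placeholder")
# request.urlopen(req) would send it; image data comes back base64-encoded.
```

Open WebUI does all of this for you - the point is only that there's nothing magic underneath, just a JSON payload with a model name in it.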
I’ve set it up with its own system prompt - guiding the model to keep outputs age-appropriate and encouraging the kids to be specific in their descriptions. The process of learning to describe what you want precisely enough to get a good result is itself a valuable skill.
The parenting bit
I can read every conversation. I’ve told them that. The same way we handle other technology in the house. Transparency in both directions.
What I’m actually looking for isn’t anything alarming. It’s patterns. What are they curious about? Where do they get stuck? What topics keep coming back? The conversation logs are a window into how they think that I wouldn’t get from asking “how was school?”
There’s a version of this that’s surveillance. That’s not what I’m going for. The goal is awareness - knowing enough about how they’re using the tool to have good conversations about it. “I saw you were asking about space yesterday - what got you interested in that?” is a different interaction to “what did you do today?” and getting “nothing” in return.
AI literacy is the new digital literacy
I got my first computer in 1995. Dial-up internet meant long-distance calls to AOL, and then we moved to the Solomon Islands - I didn’t properly get online until around 2000. But I was tinkering the whole time. I’ve written before about how that era built the exact skills that make AI feel natural now - the describe, evaluate, iterate loop that a generation of tinkerers internalised without knowing what they were practising for.
Digital literacy back then meant understanding that the MP3 you downloaded off LimeWire might not be what it claimed to be, that not everything on the internet was true, and that clicking the wrong thing had consequences. You learned it by doing - mostly by getting it wrong first. Kids who built that intuition early navigated the digital world better than kids who didn’t.
AI literacy is the same thing for the next generation. Understanding that AI can be wrong. That it can sound confident while being completely mistaken. That the way it responds is shaped by choices someone made. That it’s a tool, not an authority. That the quality of what you get out depends entirely on the quality of what you put in.
You can teach this theoretically. Or you can give them a safe environment and let them figure it out by doing - the same way most of us actually learned to use computers.
The practical details
For anyone wanting to replicate this:
- Open WebUI - open source, self-hosted, runs in Docker. Supports multiple users with role-based access. Admin can set system prompts, review chats, and control model access.
- OpenRouter - single API key for all major AI providers. Set a credit limit on the key so there are no surprises. Gemini 2.5 Flash is the sweet spot for kids - cheap, fast, and smart enough for anything they’ll throw at it.
- Tailscale - mesh VPN so the interface is accessible from any family device without exposing it to the internet. Free for personal use.
- A machine to run it on. An old laptop, a Raspberry Pi, a NAS - anything that can run Docker.
Total ongoing cost: whatever the kids use on OpenRouter, which at current pricing is negligible.
The setup does require some comfort with Docker and command-line tools. If you’re interested but not sure where to start, drop me a line - [email protected].
The longer game
My kids are going to grow up in a world where AI is everywhere. In their schools, their workplaces, their daily tools. The question isn’t whether they’ll use AI. It’s whether they’ll use it well - with an understanding of what it is, what it isn’t, and how to think critically about its output.
I’d rather they learn that at home, on a system I control, with guardrails I chose, while they’re young enough to build good habits. The alternative is they learn it from whatever app their friends are using, with whatever defaults some product team in San Francisco decided on.
Defaults matter. I’d rather set them myself.