Building an AI-Driven App

Apr 30, 2026

How I Built an AI-Powered Report Reviewer — With AI Doing Most of the Planning

For one of my study assignments, I had to build a web application that uses an external API. I decided to build something actually useful: a tool that reviews internship reports using AI, based on the real assessment criteria from my programme at Erhvervsakademi København (EK).

The result is RubricRater — paste in your internship report, click a button, and get structured feedback in Danish within seconds. Here’s how I built it, and why the process was almost as interesting as the product.


Starting with a conversation, not a codebase

I didn’t open a code editor first. I opened Claude Cowork — a desktop tool that lets you have a proper working session with an AI assistant, with access to your files and the ability to run code.

My starting point was just a rough idea and three .md files — the learning objectives, the report requirements, and the DARE/SHARE/CARE framework that EK uses to assess students. I described what I wanted to build, and we worked through the design together in conversation.

This is where it got interesting. Instead of googling “how to hide an API key in a web app” or “what’s the cheapest way to deploy a backend”, I could just ask. We talked through the architecture — why I needed a backend at all (to keep the API key secret), what Cloudflare Workers are and whether they’d work for this (yes, and they’re free), and how the React frontend would connect to it all.

Claude Cowork also read the three rubric source files and actually researched what makes a rubric work well for an LLM — it turns out the format matters a lot. A vague rubric gives vague feedback. A well-structured one with specific, observable criteria at each performance level gives the AI something concrete to evaluate against.


The plan becomes a file

Once we had the architecture figured out, the whole plan got written into a single file: CLAUDE.md.

This is a pattern I’ve now used twice, and it works really well. CLAUDE.md is a plain text file that sits in the root of the project and contains complete instructions for Claude Code — the separate AI coding tool — to build the entire application from scratch. It covers:

  • The folder structure to create
  • The exact tech stack (React + Vite, Cloudflare Workers, TypeScript, Tailwind)
  • The system prompt and user prompt template for the AI reviewer
  • How the Cloudflare Worker should be implemented
  • How the React components should behave
  • The GitHub Actions configuration for automatic deployment
  • The order in which to implement everything

Think of it as writing a very detailed brief that another developer could follow. In this case, the developer was Claude Code running in my terminal — but the principle is the same. The more precisely you describe what you want, the better the output.
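To make that concrete, here's a hypothetical excerpt of what such a brief might look like — the section names and wording are illustrative, not the actual file:

```markdown
# CLAUDE.md — build instructions for RubricRater (illustrative excerpt)

## Tech stack
- Frontend: React + Vite + TypeScript + Tailwind, deployed to GitHub Pages
- Backend: a single Cloudflare Worker (TypeScript), holding the API key as a secret

## Implementation order
1. Scaffold the Vite project and folder structure
2. Implement the Worker: validation → prompt assembly → Anthropic API call → streaming
3. Build the React components (text area, character counter, result view)
4. Add the GitHub Actions workflow for automatic deployment
```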


How the application actually works

The app has three layers, each with a distinct job.

The frontend is a React app built with Vite. It’s what the user sees: a text area where you paste your internship report, a character counter (the report has a 12,000-character limit), and a submit button. Once you submit, it shows a loading spinner while the review is being generated, then renders the result as formatted text with headings, bullet points and all.
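The counter logic is simple but worth pinning down. A minimal sketch of the client-side check — the names and types here are my own, not the actual code:

```typescript
// Hypothetical sketch of the frontend's limit check — names are illustrative.
const MAX_CHARS = 12_000;

interface ReportCheck {
  remaining: number; // characters left before the limit
  ok: boolean;       // can this report be submitted?
}

function checkReport(text: string): ReportCheck {
  const remaining = MAX_CHARS - text.length;
  // Submittable only if non-blank and within the limit
  return { remaining, ok: text.trim().length > 0 && remaining >= 0 };
}
```

The same validation runs again in the Worker — the client-side check is a convenience for the user, not a security boundary.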

The Cloudflare Worker is the backend — a single TypeScript file running on Cloudflare’s servers. This is the piece that makes the whole thing secure. The Anthropic API key lives here, encrypted, and never touches the frontend or the git repository. When the frontend sends a request, the Worker:

  1. Validates the report (checks it’s not empty or suspiciously short)
  2. Builds the full prompt by combining the system prompt, the rubric, and the user’s report
  3. Calls the Anthropic Claude API
  4. Streams the response back to the browser in real time
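Sketched in Worker form, the four steps look roughly like this. The validation threshold, prompt markup, model name, and system prompt text are my assumptions, not the real implementation:

```typescript
// Illustrative sketch of the Worker — names and thresholds are assumptions.
const MIN_CHARS = 200; // hypothetical "suspiciously short" cut-off

export function validateReport(report: unknown): string | null {
  if (typeof report !== "string" || report.trim().length === 0) return "Report is empty";
  if (report.trim().length < MIN_CHARS) return "Report is too short to review";
  return null;
}

export function buildPrompt(rubric: string, report: string): string {
  // Step 2: wrap the rubric and report so the model can tell them apart
  return `<rubric>\n${rubric}\n</rubric>\n\n<report>\n${report}\n</report>`;
}

const SYSTEM_PROMPT = "You are an experienced academic assessor from EK."; // abridged
const RUBRIC = "(the 11-criterion rubric text goes here)"; // loaded from the real rubric

export default {
  async fetch(request: Request, env: { ANTHROPIC_API_KEY: string }): Promise<Response> {
    const { report } = (await request.json()) as { report?: string };

    const error = validateReport(report); // Step 1: validate
    if (error) {
      return new Response(JSON.stringify({ error }), {
        status: 400,
        headers: { "content-type": "application/json" },
      });
    }

    // Steps 3–4: call the Anthropic Messages API and stream the body straight through
    const upstream = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": env.ANTHROPIC_API_KEY, // the secret never leaves the Worker
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-5", // substitute whichever model the app actually uses
        max_tokens: 4096,
        stream: true,
        system: SYSTEM_PROMPT,
        messages: [{ role: "user", content: buildPrompt(RUBRIC, report as string) }],
      }),
    });

    return new Response(upstream.body, {
      headers: {
        "content-type": "text/event-stream",
        "Access-Control-Allow-Origin": "*", // CORS: the GitHub Pages origin calls this Worker
      },
    });
  },
};
```

Streaming the upstream body straight through is what lets the browser render the review as it's generated rather than waiting for the full response.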

The Anthropic Claude API is the brain. It receives a carefully constructed prompt that tells it to act as an experienced academic assessor from EK, evaluate the report against 11 specific criteria, and return its feedback in a defined structure: overall assessment, criterion-by-criterion breakdown, strengths, weaknesses, improvement suggestions, and 4-6 questions a supervisor could ask the student in the oral exam.

The rubric is the key ingredient here. It defines those 11 criteria — things like how well the student reflects on personal development, whether they connect their tasks to theory from their studies, and how they demonstrate the DARE, SHARE and CARE values. Each criterion has four performance levels with specific descriptions of what the report needs to contain to qualify. That specificity is what makes the AI feedback useful rather than generic.
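To illustrate the shape — the wording here is invented, not the real rubric — a single criterion might look like:

```markdown
## Criterion 4: Connecting practice to theory
- Level 1: Tasks are described, but no theory from the programme is mentioned.
- Level 2: Theory is name-dropped without being applied to concrete tasks.
- Level 3: At least one task is analysed using a relevant theory or model.
- Level 4: Several tasks are analysed with theory, including its limitations.
```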


Deployment: two platforms, minimal effort

The frontend deploys automatically to GitHub Pages every time I push code to the main branch. GitHub Actions runs the build and publishes it — no manual steps needed after the initial setup.
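A minimal Pages workflow for a Vite build looks roughly like this — a sketch assuming the standard `actions/deploy-pages` flow, not the project's exact file:

```yaml
name: Deploy frontend
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm run build
      - uses: actions/upload-pages-artifact@v3
        with:
          path: dist   # Vite's default build output directory
      - uses: actions/deploy-pages@v4
```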

The backend lives on Cloudflare Workers, which has a generous free tier (100,000 requests per day). Deploying it was four terminal commands: install Wrangler (Cloudflare’s CLI), log in, add the API key as a secret, and deploy. The Worker gets a public URL that the frontend points to.
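Those four commands, roughly as Wrangler expects them — the secret name is my assumption:

```shell
npm install -g wrangler                  # Cloudflare's CLI
wrangler login                           # authenticate in the browser
wrangler secret put ANTHROPIC_API_KEY    # paste the key; stored encrypted
wrangler deploy                          # publish the Worker, get its public URL
```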

Total cost to run this: zero. Both platforms are free for this scale of usage.


What I took away from this

The most useful shift in how I worked was treating the AI as a collaborator in the planning phase, not just a code generator. By the time Claude Code started writing files, the decisions had already been made — the tech stack, the architecture, the prompt design, the folder structure. The coding part was almost mechanical.

That said, the manual steps still required real understanding. Knowing what a Cloudflare Worker actually is, why CORS matters, how environment secrets work — you still need to understand what you’re building to make good decisions along the way and to debug when something goes wrong.

The tool I’m most likely to keep using is the CLAUDE.md pattern. Writing a thorough brief before touching the keyboard is good practice regardless of whether an AI reads it.


Built with React, Cloudflare Workers, and the Anthropic Claude API. Planned in Claude Cowork, coded with Claude Code.