Vibe coding panoramas in Runchat
Moodboarding tools like Miro and FigJam became ubiquitous in design circles during the pandemic as a way to collaborate on design projects on an infinite shared canvas. Yet the experience of using most of these digital whiteboards is much closer to an actual, physical board than it needs to be. Sticky note tools are everywhere. Images are pasted in, moved around and deleted. Nothing is responsive or contextual. This lack of imagination seems like a missed opportunity. Why can’t we have intelligent digital whiteboards? Why can’t we use a visual canvas to both explore ideas and generate prototypes? Why can’t they be used to combine art with engineering?
tldraw’s make real project from 2023
One of the biggest influences on Runchat’s design and product philosophy was tldraw’s make real project, which made waves in 2023 for doing exactly this. By sending screenshots of the tldraw canvas to GPT-4 along with a fairly simple prompt and rendering the resulting HTML, the canvas suddenly became a space for both artistic exploration and engineering. Designers could draw what they wanted an app to look like, mark it up with some sticky notes or text and voila: a working prototype.
You are an expert web developer that converts low-fidelity wireframes into polished, responsive single-page websites. You receive sketches, diagrams, sticky notes, arrows, flowcharts, and past prototypes—your job is to turn them into complete HTML prototypes.
Your output:
Return a single, self-contained HTML file.
Use Tailwind CSS for styling.
Use <style> for any extra CSS.
Include JS in a <script> tag. Import dependencies via unpkg or skypack.
Use Google Fonts for any custom fonts.
For images, use placehold.co or solid color rectangles.
Design interpretation:
Treat everything in red as an annotation—exclude it from the final prototype.
Use your judgment to decide what should appear in the UI and what is supporting material.
Prioritize elements clearly part of the user interface: buttons, inputs, text, icons, layout hints, etc.
What to aim for:
Make the output visually polished, realistic, and interactive.
Fill in any gaps with standard UX/UI patterns—it's better to guess than to leave something unfinished.
Favor completeness over perfection: designers want to see their ideas brought to life.
You love turning ideas into reality. Do your best work—and make it feel real.
the tldraw make real system prompt
I’ve always wondered why this idea hasn’t yielded more creative fruit. Instead we have the rise of “vibe coding”, which takes place almost entirely within conventional text-based software development environments. I suspect vibe coding won out over the idea of a visual canvas for describing ideas because it is a lot easier to just ask for an app and accept whatever gets spat out than it is to think about what you want first and draw it. The former only requires someone to define a vague brief for what they want, while the latter requires actually starting to design a response to that brief. But so much is lost with this tradeoff. It is much easier to change a design visually than it is to describe changes with text instructions (I wouldn’t be surprised if “make it look better” is a common prompt in Cursor). Furthermore, designs on a canvas could be easily compared visually, and visual references could be used to guide the output of code generation to (hopefully) improve the quality of generated designs.
Variant AI might be a visual interface for vibe coding UI
Instead of trying to turn a whiteboard into a computer, like tldraw, perhaps there is merit in trying to make no-code prototyping tools more like whiteboards. There is a lot of similarity between moodboards, which are used to define relationships between ideas, and node-based editors, which are used to define relationships between pieces of code. Node-based editors reduce the abstraction of code by representing functions and data as visual elements like nodes and edges. It feels like a natural evolution to make node-based editors even more visual by representing each node not as a set of input forms or text but instead as its generated output: images, video, graphical tables or custom UI components generated on the fly. A visual node editor could be an intelligent moodboard, or a canvas for vibe coding. The fact that we can build working prototypes also creates an explosion of space for creativity: we might use our canvas to build some kind of tool, or we could use nodes on the canvas to operate on the canvas itself, creating nodes that allow drawing on the canvas, editing generated videos, creating immersive 3D environments, or auto-generating layouts like InDesign. The canvas becomes an abstract space of possibility.
Generating an HTML layout from a reference image
Parametric Layouts
One of the most obvious use cases for visual vibe coding is using HTML to generate layout templates. We can write a simple prompt that takes a reference image and attempts to lay out placeholder elements with similar proportions using HTML.
generate a single page html panel layout using the reference image as a guide. Use placeholder elements for images - we will replace these with real images later. Include placeholder text and annotations using similar font sizes to the reference panel.
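The exact markup varies from run to run, but the kind of abstract template this prompt tends to produce looks something like the hand-written sketch below (an illustrative approximation rather than actual Runchat output), with grey blocks standing in for images and dummy text standing in for annotations:

<!DOCTYPE html>
<html>
<head>
  <!-- Tailwind via CDN, as suggested in the make real prompt -->
  <script src="https://cdn.tailwindcss.com"></script>
</head>
<body class="p-6">
  <!-- Two-thirds / one-third split roughly matching the reference panel -->
  <div class="grid grid-cols-3 gap-6 h-screen">
    <!-- Hero image placeholder -->
    <div class="col-span-2 bg-gray-300 flex items-center justify-center">
      <span class="text-gray-500">Image 1</span>
    </div>
    <!-- Text column with annotation placeholders -->
    <div class="flex flex-col gap-4">
      <h1 class="text-4xl font-bold">Placeholder title</h1>
      <p class="text-sm text-gray-600">Placeholder annotation text set at roughly the same size as the reference panel.</p>
      <div class="flex-1 bg-gray-300 flex items-center justify-center">
        <span class="text-gray-500">Image 2</span>
      </div>
    </div>
  </div>
</body>
</html>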
This abstract template can then be populated with content using a list of images, some text and some simple instructions. We can even generate placeholder images to test the layout. There are several advantages to this over InDesign:
Our layout is responsive to changing page size
We can instantly swap content
We can instantly redesign the entire layout (and repopulate it) just by swapping the reference design image
We can publish our layout to the web if we want to
Creating Tools in Runchat
I’ve written before about how Runchat can be used to create reusable workflows called Tools. A tool is just a Runchat workflow that is wrapped up into a single node with curated inputs and outputs. Tools allow the complexity of the workflow to be abstracted into a single node on a canvas, and can be called automatically by language models. Unlike a lot of other agent frameworks, Runchat lets you build tools for agents without writing any code, and you can see exactly how language models use your tool and make changes and overrides if required.
Whenever you run a node in Runchat, the underlying calculation always takes place securely on our server. Tools are just nodes, so they also always run on the server. There are a few reasons for this:
Any Runchat workflow or tool can be run via API and used in plugins for Rhino or Blender (a rough sketch of an API call follows this list)
Tools can safely and securely call third party APIs with encrypted API keys that are never sent to the client
We don’t want it to be possible to share tools with malicious code that runs in the browser
Various performance optimizations
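To illustrate the first point, running a workflow over HTTP might look something like the sketch below. Treat every detail here as an assumption for illustration only: the endpoint path, payload shape and auth header are hypothetical stand-ins, not Runchat’s documented API.

<!-- Hypothetical example only: the URL, parameter names and auth header are assumptions -->
<script>
  // Run a workflow by id with a set of named inputs, then log its outputs
  fetch("https://runchat.app/api/v1/workflows/YOUR_WORKFLOW_ID/run", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY"
    },
    body: JSON.stringify({ inputs: { reference_image: "https://example.com/reference.jpg" } })
  })
    .then(response => response.json())
    .then(outputs => console.log(outputs));
</script>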
This architecture was intended to support the design and development of automatable workflows. We weren’t so interested in supporting tools that required human input, as these workflows couldn’t be run automatically. However, as Runchat evolves to support more creative use cases, it is increasingly operating as a canvas for generating micro-workflows that are only ever run manually. In this scenario, there is an opportunity to explore the idea of building tools designed for humans rather than optimized for language models and cron jobs. What if people could use Runchat to vibe code their own design tools?
Playing around with media in Runchat
Parametric Lock In
This idea of building custom UI for design tasks on the fly is quite exciting. It’s nascent, but we can already share a few examples of it in practice. Take for instance the task of masking an image for inpainting (regenerating a region of the image). We have a node that does this automatically using a pretrained model and a text prompt, with the goal of maintaining a fully automatable image editing pipeline. Say you have 100 images of buildings and you want to change the sky in all of them to a sunset: you can use the mask node to select the “sky” of each image, and it will figure out how to create the mask for you regardless of the content of the image.
The issue with these automatable workflows is that they don’t allow for manual input. We can’t just draw whatever mask we want, or the workflow would no longer be parametric and automatable. Furthermore, to support manually drawing image masks we would need to build a custom node for it. Runchat maintains a very small library of just 5 nodes, and the idea of adding a 6th for a marginal use case is unappealing. But what if someone wants to do this? Why can’t they build a masking tool themselves?
VibeUI
Runchat supports rendering HTML output in an iframe on nodes, so in theory we can build any UI that can be rendered in HTML. Rendering generated HTML does open up the possibility of running arbitrary JavaScript in the browser, so we need to warn users about this before actually rendering the output content. We can generate HTML by returning it from the Code or Agent node, and we can build UI very quickly by using an LLM to vibe code it for us. For instance, to create a tool for masking images, we can simply connect an image to our Code node input and write “return a single page html app that allows us to draw a mask over this image and copy it to the clipboard”. Then we run the node and we’re done. If we wanted to, we could even provide a screenshot of the app UI to an Agent node and get it to code up the same UI, like we did with the layout example earlier.
The caveat to this flexibility is that running our node generates the UI rather than running it, so anything we actually create in the UI isn’t available as an output parameter on the node. On the flip side, these UI elements are intended to be manual and so break parametric workflows anyway. To get data out of our node with some degree of security, we can use the clipboard API and simply copy and paste back into our main canvas.
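For a sense of what the Code node might return for the masking prompt above, here is a minimal hand-written sketch (illustrative only, not actual Runchat output, and the image URL is just a placeholder). It overlays a transparent drawing canvas on the image and copies the painted region to the clipboard as a black-and-white PNG mask:

<!DOCTYPE html>
<html>
<body style="margin:0">
  <!-- The image to be masked sits underneath a transparent drawing canvas -->
  <div style="position:relative; display:inline-block">
    <img id="photo" src="https://placehold.co/800x500">
    <canvas id="mask" style="position:absolute; inset:0; cursor:crosshair"></canvas>
  </div>
  <button id="copy">Copy mask to clipboard</button>
  <script>
    const img = document.getElementById("photo");
    const canvas = document.getElementById("mask");
    const ctx = canvas.getContext("2d");
    let drawing = false;

    // Match the canvas size to the image once it has loaded
    img.onload = () => { canvas.width = img.width; canvas.height = img.height; };

    // Paint white blobs wherever the user drags the mouse
    canvas.addEventListener("mousedown", () => drawing = true);
    window.addEventListener("mouseup", () => drawing = false);
    canvas.addEventListener("mousemove", e => {
      if (!drawing) return;
      ctx.fillStyle = "white";
      ctx.beginPath();
      ctx.arc(e.offsetX, e.offsetY, 20, 0, Math.PI * 2);
      ctx.fill();
    });

    // Export the painted region over a black background and copy it as a PNG
    document.getElementById("copy").addEventListener("click", () => {
      const out = document.createElement("canvas");
      out.width = canvas.width;
      out.height = canvas.height;
      const octx = out.getContext("2d");
      octx.fillStyle = "black";
      octx.fillRect(0, 0, out.width, out.height);
      octx.drawImage(canvas, 0, 0);
      out.toBlob(blob =>
        navigator.clipboard.write([new ClipboardItem({ "image/png": blob })])
      );
    });
  </script>
</body>
</html>

Pasting that PNG back onto the canvas gives us a mask we can wire into an inpainting step.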
Creating a panorama image with Flux then viewing it in 360 with our vibe ui
Custom Visualization
Another use case for vibe coded nodes is building custom tools for visualization. For instance, asking the Code node to “render this image as a 360 panorama” will (sometimes) produce a very simple HTML page that uses A-Frame to create the panorama, right in the Runchat canvas. From this we could screenshot views to use in other generative AI tools, create more immersive presentations in Runchat, build simple games and a whole lot more.
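The generated page usually boils down to just a few lines of A-Frame. A minimal hand-written version (with a placeholder image URL) looks something like this:

<!DOCTYPE html>
<html>
<head>
  <!-- A-Frame provides the WebGL scene, camera and drag-to-look controls -->
  <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
</head>
<body>
  <a-scene>
    <!-- a-sky wraps an equirectangular image around the inside of a sphere -->
    <a-sky src="https://example.com/panorama.jpg" rotation="0 -90 0"></a-sky>
  </a-scene>
</body>
</html>

Dragging inside the iframe orbits the default camera, which is all we need to grab screenshots of different views.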
Next steps
Allowing people to make both workflow automations and custom design tools in Runchat is something we’re really excited about. There are a tonne of security footguns to figure out, but the goal is to enable people to eventually create nodes that run entirely in the browser with control over input and output parameters. We have a few novel ideas for how to do this, including generating node UI on the fly from prompts instead of code, or writing SDKs to enable securely creating custom input, output and node calculation components.
If you’ve made it this far, try writing a prompt to create your own node UI from a code node in Runchat and let us know how you get on.