The complete guide to AI for product design

An in-depth guide to the current landscape of AI tools and how they can fit into your design workflow

Henry Dan

Sep 12, 2025

You have an unfair advantage right now.

It seems like every designer I talk to is some mix of curious, confused, and concerned about the future.

  • Will AI take my job?

  • Can I use AI for design?

  • Where do I even start?

The people who spend their time experimenting with AI tools, learning, practicing, and exploring what they're capable of are going to be the designers that every company will be fighting to hire, and they'll have the most job security as the world continues to change.

You might already be using AI tools, but you're probably not getting as much out of them as you could be. I wrote this guide to map out how I use AI tools in my day to day work as a product designer to grow my freelance business, and the different ways you as a designer can benefit from AI tools in your workflows.

Just like any tool, there are things AI is good at and things it struggles with. But the best way to learn is to get started, and the best time to start is right now.


What’s next for designers and how to prepare

Based on my observations and experience, here are my predictions:

The designer role isn't going anywhere. Creative problem solving, product sense, and an eye for taste and craft will always be valuable. Even when AI can generate top-tier designs, someone will still need to be in charge of prompting the correct goals and evaluating its output with a close eye.

But…

AI fluency will be the bare minimum. This is the new internet, the new email. AI is changing every part of the product process, and we will be expected to use the most efficient and powerful tools to get our work done.

Expectations will rise, without the guidance to get there. Companies will expect their teams to be using AI to get more done more quickly, even if they aren't able to give instructions on how we should make that happen.

Generalists will win, specialists will struggle. With AI tools enabling designers to write PRDs, make code prototypes, and synthesize research insights, the expectation will be that we have the time and ability to do more than just make static mockups in Figma.

We can't live in Figma anymore. We may not have to become full design engineers or product managers, but we will be pushed to be more and more involved in the entire product lifecycle.


Ok, so what should I be doing now?

Start looking at your current workflow: Where could AI fit in? What are some of the challenging, frustrating, or redundant parts of your work? Starting from these pain points is a great way to think about use cases for AI tools.

Set aside time to play: The best way to learn how these tools work and how they can work for you is to try them out and stress test them with real projects. Block out time in your week just for experimenting and personal projects.

Use AI to learn new skills: AI can help you learn how to write code, learn new software, or anything you want. It can help answer questions and even act as a mentor to make it easier to pick up a new skill. Just pick something that interests you most and explore.

Share what you learn: Share what you're learning with your coworkers and your company and become the “AI person” on your team. Post on LinkedIn and Twitter/X. Make personal projects and share them. Be loud about these skills so people will notice.

Follow thought leaders: Find the people online that are doing what you're doing at a high level. These are the trailblazers you want to watch and learn from. This is how you can keep in the loop on what the next big thing is, or how to get the most out of your favorite tools.

It’s a lot, but it’s a solvable problem. The key is to figure out what you’re interested in enough to learn more about, and to dedicate the time to go down that rabbit hole.


How AI Fits Into My Workflow

Here are some of my main day to day use cases for AI tools:

Generating code prototypes: This is the biggest game changer. I can use a tool like v0 to prototype custom components, landing pages, UIs, and even full flows so that I’m not just handing off a Figma file to my developers. Claude can even natively prototype AI-powered tools, so you can one-shot prompt your own apps.

As a second brain: AI is great at cataloging information and then summarizing it back to you. For example I use Granola to record meetings and take notes, and then I can chat with it to ask questions about the meeting later. I use mymind to store bookmarks because it can auto tag them so I can always find what I'm looking for. And I'll train Claude projects on specific client projects so I have a go-to assistant to answer questions.

As an interviewer/brainstorming partner: When I have a rough idea that I'm brainstorming on I’ll prompt Claude to help me think it through. For example, if I have an idea for a user flow I’ll describe it to Claude and ask for its feedback: what are pros and cons, what am I not considering, what are other ideas, what follow up questions do you have? Sometimes I’ll use ChatGPT voice and just yap into it to brainstorm, then ask it to summarize our conversation in a doc so I can refine the ideas later.

No more lorem ipsum: With AI, no one should ever use lorem ipsum or generic stock photos again. If I have a table of data in Figma I’ll put a screenshot of my UI design into Claude and ask it to write out example content for it. If I need placeholder images for a website I'll generate specifically what I'm looking for in Visual Electric instead of wasting time finding a stock photo that's just good enough.

As a mentor: I have a "design mentor" Claude project that's trained on books, podcasts, and articles from designers I look up to so that I can ask it for feedback and advice. When I generate code with v0 I’ll ask the AI to explain how everything works so I can learn while I'm building. You can even highlight specific parts of the code to explain it. And if I have a big presentation I'll record myself practicing and upload it to Claude for feedback so I can prep beforehand.

Web search and research: When I don’t know where to start with a big question I’ll have Claude run deep research to scour hundreds of web sources to weigh options. I can ask it something like “What’s the best way to organize Figma designs for a big project” or “I want to learn Cursor, where should I start?” and it will give me a huge report based on hundreds of sources. ChatGPT, Gemini, and Perplexity are all also really good for plain language search and research.


UI Design with AI

The good and bad news right now is that AI can only generate "good enough" UIs on its own. There's still some manual effort required at the start to prompt correctly and at the end to edit what it generates, but it can still be really useful in your workflow.

Tools I've tried that I recommend:

Magicpath: This one feels the most like Figma and is a great place to start. You get an open canvas to generate UIs in, you can create multiple iterations of an idea, you can share prototypes, and you can generate full flows. It supports multiple breakpoints and design systems, and it generates the best looking UIs of any tools I've used so far.

Magic Patterns: Similar to Magicpath as a "design focused" generation tool. You can generate full UIs or just individual components, create design systems, test breakpoints, and export right to Figma. Just like Magicpath it generates really good looking UIs, and they offer quality of life features like @ mentioning to add components and preset design systems.

v0: Great Swiss Army knife tool for generating more powerful prototypes. The best thing about v0 is you can create components or full UIs, and you can edit the code yourself. It's like Cursor, but you don't have to set up your own dev environment. You also have powerful tools to build functional prototypes like adding user authentication, databases, or APIs like Anthropic's to add AI features to your prototype. And it also has a design mode where you can edit design details without prompting.

Lovable: Another tool that generates great looking UIs and is very user friendly for non-developers. Similar overall features to v0, but better for generating a full website or app than single components.

Some tips on using these tools for UI design:

Make sure you're starting with a great prompt: Unless you want AI to just wing it (which can be useful in some cases) it's worth it to write a good requirements doc before jumping into a UI tool. Be specific about what's in scope and out of scope and think about what you want it to focus on. If you're generating a dashboard maybe specify that you don't need the navigation or customization options. If you're prototyping a mobile app then mention that desktop breakpoints are out of scope.

Learn the basics of Tailwind CSS: Tools like v0 and Lovable have a "design mode" that will let you tweak Tailwind CSS variables for the UI, which is faster and cheaper than re-prompting. You can also edit the code yourself, so it really helps to know the basics.
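To see why the basics pay off, here's a minimal sketch of a hypothetical card component styled with standard Tailwind utility classes (the card content is made up, but the class names are real Tailwind utilities):

```html
<!-- A hypothetical card styled entirely with Tailwind utility classes. -->
<!-- Each class maps to one CSS rule: p-6 is padding, shadow-md is a box shadow, etc. -->
<div class="max-w-sm rounded-lg bg-white p-6 shadow-md">
  <h2 class="text-lg font-semibold text-gray-900">Weekly report</h2>
  <p class="mt-2 text-sm text-gray-600">Your summary is ready to review.</p>
  <button class="mt-4 rounded-md bg-blue-600 px-4 py-2 text-sm text-white hover:bg-blue-700">
    Open report
  </button>
</div>
```

Once you can read this, a tweak in design mode (or in the code itself) is just a class swap: rounded-lg to rounded-xl for rounder corners, or text-sm to text-base for larger body text, with no re-prompting needed.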

Assume you'll get 60% of what you want: 80% if you're lucky. AI tools don't always have a great eye for design or consistency, and they definitely don't know your customers as well as you do. So plan to manually edit a good portion of the UI to get it where you want it.

Upload your design system: That being said, more and more tools are supporting custom design systems. v0, Magicpath, and Magic Patterns have options for this, and you can also add custom instructions to maintain consistency across different projects.

Use HTML2Design to bring it to Figma: However you generate code with AI, you can use HTML2Design to copy it over to Figma. Magicpath and Magic Patterns have native Figma export, but HTML2Design gives you more options like auto layout and hover states, and you can export from any AI builder.

These tools can be great for generating a good first draft, quickly exploring ideas, or handing off truer-to-life code prototypes. I recommend trying a few different tools and prompts and seeing what's the best fit for you and your workflows.

Tools That Aren't Ready Yet

While I'm all in on AI for design, not every tool lives up to the hype right now. Surprisingly, I've found that code generation tools create better UIs than actual UI generation tools.

Figma Make: Despite having direct access to design files, I've never been able to get the results I want from Figma Make. I've seen it struggle with basic tasks like generating a code component from a design component, and it seems to take much longer than other tools. I could see it being useful if I could generate a component and bring it back into Figma, but it's just not there yet.

Google Stitch (formerly Galileo): For a tool that's specifically focused on UI design every layout from Stitch looks the same to me. Like I said, I get better UIs out of code tools.

Uizard: Uizard has some cool tools like generating UIs based on hand drawn wireframes, or generating themes so your UIs can fit into the same system. But the biggest downfall is that there's no Figma export; your UI generations just live in Uizard. It's an interesting tool, but I can't just burn my Figma workflow to the ground.

Replit: Every project I created in Replit took ages to set up because it required all of these boilerplate files, making it painful for quick prototypes or components. v0 feels like it does the basics better, and lets me do much more too.

All in all for the price tag I think you can do better. I really hope to see these tools improve in the future, but right now unfortunately I have to wave the red flag.


UX Design with AI

In addition to generating the UIs, AI can also be really helpful as a thought partner when you're solving tough UX problems:

Brainstorming: At the very first step of a project I'll prompt Claude with the design brief, the client company, and my ideas and ask it to give me follow up questions and come up with its own ideas. This is also a great use for ChatGPT voice mode, where you can just explain ideas out loud without having to make them a formal written prompt.

Note: In my experience ChatGPT voice is even more of a yes-man than normal ChatGPT, so make sure you prompt it to give you critical feedback and play devil's advocate so it doesn't just tell you how smart you are.

Research: When starting a project I'll have Claude run research to look for inspiration and ideas from other existing software in the same niche or with a similar audience, and it will write me a report on other software I should look at and key insights from it. And it cites sources with links, so it's easy to continue with my own research.

User flows: When I'm planning out a user flow I'll prompt Claude with my ideas and have it generate a step by step flow. This is also a great use for AI if you've already done brainstorming and research in the same chat, because you've given it the context it needs for the problem space. And once you have a user flow you can continue the chat to brainstorm changes, and even generate a flow diagram with mermaid.js to document everything.
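As a sketch of what that last step can produce, a flow like the one described above might come back as a mermaid.js diagram you can paste into any mermaid renderer (the signup flow here is a made-up example, not from a real project):

```mermaid
flowchart TD
    A[Landing page] --> B{"Has an account?"}
    B -- Yes --> C[Log in]
    B -- No --> D[Sign up form]
    D --> E[Email verification]
    E --> F[Onboarding checklist]
    C --> G[Dashboard]
    F --> G
```

Because the diagram is plain text, you can keep iterating on it in the same chat and regenerate the visual instantly, which makes it handy documentation to hand off alongside mockups.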

AI has changed my UX design process a lot; I find myself starting with text before sketching anything out. Not only does it help me think through my ideas, but it also forces me to put them into words and get out of my own head. And it's seamless to jump between coming up with an idea and bringing it to life with a prototype.

Getting the Most out of Your Chat

You’re probably already using ChatGPT, but here are a few tips to level up the results from it.

First a couple basics:

Compare models: Don’t limit yourself to ChatGPT. Try Claude, Gemini, and other tools and see what works best for you. Personally I prefer Claude for most tasks, but I like ChatGPT’s voice mode and image generation, and Gemini’s image generation and context window.

Pay for pro: Whatever tool you're using it's worth budgeting the $20 per month to upgrade from the free plan. You'll get more usage to experiment with and access to the most powerful models and tools. If cost is a concern, Gemini may be cheaper for you if you're already paying for Google One, and it's also free for students.

Ok, you’re using the most current models and you’ve picked the best one for you. Here’s how to prompt it for the best results:

Three Core Principles

  1. Provide the right context: Instead of asking AI to do everything at once, give it specific background about you, your goals, and what success looks like. For example a good way to start a task can be having it interview you to gather more information about your goals before it actually writes its full response.

  2. Give it permission to be wrong: AI tools are designed to please us and agree with us, which can lead to unhelpful responses when it tells us how smart we are. It helps to explicitly tell the AI that only giving accurate answers or just saying "I don't know" is more valuable than trying to answer everything. If you set clear criteria for what constitutes a good response then it can break this "yes man" pattern.

  3. Don't ask it to do too many things at once: Break complex requests into separate steps. For example instead of asking it to build a complete website, first work together on planning the details and structure, and then move to execution as separate tasks.
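Put together, the three principles might shape an opening prompt something like this (the project details here are placeholders, not a real brief):

```text
Context: I'm a product designer working on a dashboard for a B2B analytics app.
Goal: Plan the information architecture before any visual design.

Before answering, interview me: ask up to 5 questions about my users and goals.
If you're unsure about something, say "I don't know" rather than guessing.
Don't design anything yet; we'll move to layout ideas as a separate step.
```

Notice how each principle shows up: the first two lines provide context, the "I don't know" line gives it permission to be wrong, and the last line scopes the task to one step.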

Quick Tips for prompting

Treat it like a junior employee: Try to give guidance and clear feedback to set expectations on what you're looking for and what you're not.

Use the right tools for the task: Balance using normal chat with modes and tools like web search, deep research, and thinking modes. Each one is better for different use cases.

Set expectations for tone and output: How you prompt it can determine how it responds to you, for example prompting it to give pros and cons if you think it might be over agreeing with you, or asking it to write in full sentences if it uses bullet points too much.

If things get messy start a new chat: You can always copy out the important text into a new chat to “reset”, or even ask the AI to summarize the conversation in a document that you can share as context with the other chat.

Building a Second Brain with AI

Like I mentioned before, one of the things AI is best at is summarizing and searching a lot of information, which makes it great for building a second brain of important information.

Meeting recordings: Granola is my go-to for this. I can record meetings, organize them into projects, and chat with my meetings to recap important information. I also use Granola the way I'd use Loom, recording quick notes and todos so I can reference them later. (You can also level this up by using an MCP server in Claude to generate the actual todos in Notion, Linear, or another project management app).

Bookmarking and inspiration: I use mymind to bookmark websites and save images and notes. It's a great way to store inspiration, books, and resources to come back to, and AI auto tags everything so it's easy to search through and find important stuff. (If you wanted something more customizable you could also use Notion for this).

Custom projects/GPTs/Notebooks: If you have resources you need to reference a lot like documentation or a knowledge base you can make a Claude project trained on that data so you can chat with it and answer questions or get summaries. I train Claude projects for each of my freelance clients, so I can get answers to quick questions about their businesses without bothering my clients. Chatgpt also offers custom GPTs for this, and NotebookLM offers this for Gemini users.

Note taking, summaries, and search: While I like Granola for automating meeting notes, Notion is probably the best tool for taking your own notes and having a powerful AI to search, summarize, and manipulate the data. Craft and Heptabase also offer AI features, and if you want open source you can extend Obsidian with AI plugins.

Image Generation

The real hurdle to get over for making image generation work for actual design workflows is generating something editable. When AI generates an image you’re just getting a flat image file, not layers or vectors you can tweak.

Here are a couple tips for getting started:

Reverse engineer a visual style for AI: If you need to generate a lot of images for something, try to plan your visual style around something you know AI is good at. For example preferring 3D icons over smooth vector icons where imperfections are more obvious. Or picking an established art style like water color or line art that would’ve been included in the training data. Or picking an art style where imperfections are less obvious and easier to edit, like collage.

Prioritize editability: You want to make sure you can edit an image that gets generated if you need to make small tweaks, instead of regenerating it from scratch. It can help to generate multiple images for each layer (for example a man hiking would be a man, landscape, and sky image with backgrounds removed) or you can use a vectorization tool like the one from Lottie files to turn it into vectors.

And here are some tools you can try out:

ChatGPT: ChatGPT is a great place to start because it's just one of many tools you get with your pro plan. Image generation is also extremely good, if a little slow at times. I find it works best with an established style or reference like “mac desktop gradient wallpaper”, or when editing provided images like “combine this picture of a man with this landscape” or “recreate this photo in a cartoon style”.

Visual Electric: VE is another great starting point for experimentation because you get a familiar canvas like Figma or Illustrator, and access to a wide range of models and preset styles you can try out. You also get built in tools like retexturing, background removal, and markup tools that help with editing.

Nano-banana (Gemini): The most powerful image generation tool right now, and very fast. Similar to GPT it seems to do best editing provided reference material, but can also do impressive things like “Show this object from a different angle” or "show this person in a different pose”.

Midjourney: Midjourney is very powerful, offers style reference codes to save and share styles, and frame to frame video for easy animations. For me the biggest downside is that your generations are public by default unless you pay $40/mo, which makes it hard to justify compared to other tools.

Note: For help prompting image generation tools you can also prompt ChatGPT to write prompts for you, and then copy them over to another tool to generate.

What's Next?

The landscape of these tools is changing every day, and I’m trying to keep up myself. Here are a few things I’m working on exploring right now:

Claude Code: The terminal has always been intimidating to me, but I'm really curious to learn how to use Claude Code. It levels up vibe coding because it has the full context of all of your project files so you can do more with it, you can run multiple agents at once, and it's included with your Claude pro plan. Plus this is a good excuse to dive deeper into the engineering side of things and maybe even self host my own apps for personal use. And there are UIs like Terragon that will hopefully lower the learning curve, and GitHub Desktop will let me step outside of the terminal (thankfully).

Notion AI: As much as I promote creating a second brain, I’m not very good at curating them and I rely on other tools to do that for me. But I'm very interested in the idea of having one source with all of my notes, ideas, recordings, etc. in Notion and explore how I can use Notion's agents to generate organized databases from them and search and ask questions about the data.

NotebookLM: For a long time I knew NotebookLM as the “podcast generator” app, and that wasn't very interesting to me. But after looking into it more it seems like it could be a better fit than Claude for my “knowledgebase” AI use cases. Plus Gemini is supposed to have a larger context window, which would be helpful.

Magicpath / Magicpatterns: These are the two killer apps for vibe UI design, and I’m really curious to experiment with them more deeply. The biggest problems I'm hoping to solve are prototyping full flows, matching existing design systems, and generating more complex/custom UIs, and they both seem to be pretty good at all of these.

Magic Animate: LottieFiles recently launched Magic Animate, which can auto generate animations from your Figma files. Depending on how well it works this could be a game changer for prototypes and marketing videos.

Remotion: Remotion is a code library for creating videos, and AI is really good at writing code, so in theory you can prompt AI to create remotion videos. Again, I'll need to test how well this works but this could be really really cool.

The reality is that what's a unique differentiator today will be the baseline expectation a year from now. But that's not scary; it's exciting. Small teams can now move like they have 5 times the headcount with strategic use of AI. As an individual, I can run my freelance agency at a speed that would otherwise take a team of two junior designers and an assistant to match.

I'd rather be sprinting ahead of the pack than sprinting to catch up. And right now, you have just as much opportunity to figure this out as anyone else. The tools are accessible, your company wants you to learn them, and the only thing standing between you and mastery is starting today.

Subscribe for more articles like this

The complete guide to AI for product design

An in-depth guide to the current landscape of AI tools and how they can fit into your design workflow

Henry Dan

Sep 12, 2025

You have an unfair advantage right now.

It seems like every designer I talk to is some mix of curious, confused, and concerned about the future.

  • Will AI take my job?

  • Can I use AI for design?

  • Where do I even start?

The people who spend their time experimenting with AI tools, learning, practicing, and exploring what they're capable of are going to be the designers that every company will be fighting to hire, and they'll have the most job security as the world continues to change.

You might already be using AI tools, but you're probably not getting as much out of them that you could be. I wrote this guide to map out how I use design tools in my day to day work as a product designer to grow my freelance business, and the different ways you as a designer can benefit from AI tools in your workflows.

Just like any tool, there's things AI is good at and there are things it struggles with. But the best way to learn is to get started and the best time to start is right now.


What’s next for designers and how to prepare

Based on my observations and experience, here are my predictions:

The designer role isn't going anywhere. Creative problem solving, product sense, and an eye for taste and craft will always be valuable. Even when AI can generate top tier designs someone will still need to be in charge of prompting the correct goals and evaluating its output with a close eye.

But…

AI fluency will be the bare minimum. This is the new internet, the new email. AI is changing every part of the product process, and we will be expected to use the most efficient and powerful tools to get our work done.

Expectations will rise, without the guidance to get there. Companies will expect their teams to be using AI to get more done more quickly, even if they aren't able to give instructions on how we should make that happen.

Generalists will win, specialists will struggle. With AI tools enabling designers to write PRDs, make code prototypes, and synthesize research insights, the expectation will be that we have the time and ability to do more than just make static mockups in Figma.

We can't live in Figma anymore. We may not have to become full design engineers or product managers, but we will be pushed to be more and more involved in the entire product lifecycle.


Ok, so what should I be doing now?

Start looking at your current workflow: Where could AI fit in? What are some of the challenging, frustrating, or redundant parts of your work? Starting from these pain points is a great way to think about use cases for AI tools.

Set aside time to play: The best way to learn how these tools work and how they can work for you is to try them out and stress test them with real projects. Block out time in your week just for experimenting and personal projects.

Use AI to learn new skills: AI can help you learn how to write code, learn new software, or anything you want. It can help answer questions and even act as a mentor to make it easier to pick up a new skill. Just pick something that interests you most and explore.

Share what you learn: Share what you're learning with your coworkers and your company and become the “AI person” on your team. Post on Linkedin and Twitter/X. Make personal projects and share them. Be loud about these skills so people will notice.

Follow thought leaders: Find the people online that are doing what you're doing at a high level. These are the trailblazers you want to watch and learn from. This is how you can keep in the loop on what the next big thing is, or how to get the most out of your favorite tools.

It’s a lot, but it’s a solvable problem. The key is to figure out what you’re interested enough in to learn more about and dedicating the time to go down that rabbit hole.


How AI Fits Into My Workflow

Here are some of my main day to day use cases for AI tools:

Generating code prototypes: This is the biggest game changer. I can use a tool like v0 to prototype custom components, landing pages, UIs, and even full flows so that I’m not just handing off a Figma file to my developers. Claude can even natively prototype AI-powered tools, so you can one-shot prompt your own apps.

As a second brain: AI is great at cataloging information and then summarizing it back to you. For example I use Granola to record meetings and take notes, and then I can chat with it to ask questions about the meeting later. I use mymind to store bookmarks because it can auto tag them so I can always find what I'm looking for. And I'll train Claude projects on specific client projects so I have a go-to assistant to answer questions.

As an interviewer/brainstorming partner: When I have a rough idea that I'm brainstorming on I’ll prompt Claude to help me think it through. For example, if I have an idea for a user flow I’ll describe it to Claude and ask for its feedback: what are pros and cons, what am I not considering, what are other ideas, what follow up questions do you have? Sometimes I’ll use chatgpt voice and just yap into it to brainstorm, then ask it to summarize our conversation in a doc that I can refine the ideas.

No more lorem ipsum: With AI, no one should ever use lorem ipsum or generic stock photos again. If I have a table of data in figma I’ll put a screenshot of my UI design into Claude and ask it to write out example content for it. If I need placeholder images for a website I'll generate specifically what I'm looking for in Visual Electric instead of wasting time finding a stock photo that's just good enough.

As a mentor: I have a "design mentor" Claude project that's trained on books, podcasts, and articles from designers I look up to so that I can ask it for feedback and advice. When I generate code with v0 I’ll ask the AI to explain how everything works so I can learn while I'm building. You can even highlight specific parts of the code to explain it. And if I have a big presentation I'll record myself practicing and upload it to Claude for feedback so I can prep beforehand.

Web search and research: When I don’t know where to start with a big question I’ll have Claude run deep research to scour hundreds of web sources to weigh options. I can ask it something like “What’s the best way to organize Figma designs for a big project” or “I want to learn Cursor, where should I start?” and it will give me a huge report based on hundreds of sources. Chatgpt, Gemini, and Perplexity are all also really good for plain language search and research.


UI Design with AI

The good and bad news right now is that AI can only generate good enough UIs on its own. There's still some manual effort required at the start to prompt correctly and at the end to edit what it generates, but it can still be really useful in your workflow.

Tools I've tried that I recommend:

Magicpath: This one feels the most like Figma and is a great place to start. You get an open canvas to generate UIs in, you can create multiple iterations of an idea, you can share prototypes, and you can generate full flows. It supports multiple breakpoints and design systems, and it generates the best looking UIs of any tools I've used so far.

Magic Patterns: Similar to Magic Path as a "design focused" generation tool. You can generate full UIs or just individual components, create design systems, test breakpoints, and export right to Figma. Just like Magic path it generates really good looking UIs, and they offer quality of life features like @ mentioning to add components and preset design systems.

v0: Great swiss army knife tool for generating more powerful prototypes. The best thing about v0 is you can create components or full UIs, and you can edit the code yourself. It's like cursor, but you don't have to set up your own dev environment. You also have powerful tools to build functional prototypes like adding user authentication, databases, or APIs like Anthropic's to add AI features to your prototype. And it also has a design mode where you can edit design details without prompting.

Lovable: Another tool that generates great looking UIs and is very user friendly for non developers.. Similar overall features as v0, but better for generating a full website or app than single components.

Some tips on using these tools for UI design:

Make sure you're starting with a great prompt: Unless you want AI to just wing it (which can be useful in some cases) it's worth it to write a good requirements doc before jumping into a UI tool. Be specific about what's in scope and out of scope and think about what you want it to focus on. If you're generating a dashboard maybe specify that you don't need the navigation or customization options. If you're prototyping a mobile app then mention that desktop breakpoints are out of scope.

Learn the basics of Tailwind CSS: Tools like v0 and Lovable have a "design mode" that will let you tweak Tailwind CSS variables for the UI, which is faster and cheaper than re-prompting. You can also edit the code yourself, so it really helps to know the basics.
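If you know even a little Tailwind, small edits like these stop being re-prompts. A hypothetical example (the button markup is mine, but the class names are standard Tailwind utilities):

```typescript
// A button like the markup v0 or Lovable might generate. Each Tailwind
// utility class maps to one CSS property, so visual tweaks are class swaps.
const primaryButton: string = `
  <button class="bg-blue-600 hover:bg-blue-700 text-white px-4 py-2 rounded-lg shadow-sm">
    Save changes
  </button>`;

// More rounding and a bigger hit area, without touching the prompt:
const tweakedButton: string = primaryButton
  .replace("rounded-lg", "rounded-full") // corner radius
  .replace("px-4 py-2", "px-6 py-3");    // horizontal/vertical padding

console.log(tweakedButton);
```

In a design mode you'd make these swaps through the UI rather than in code, but knowing which utility controls which property is what makes the editing fast.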

Assume you'll get 60% of what you want: 80% if you're lucky. AI tools don't always have a great eye for design or consistency, and they definitely don't know your customers as well as you do. So plan to manually edit a good portion of the UI to get it where you want it.

Upload your design system: That being said, more and more tools are supporting custom design systems. v0, Magic Path, and Magic Patterns have options for this, and you can also add custom instructions to maintain consistency across different projects.

Use HTML2Design to bring it to Figma: However you generate code with AI, you can use HTML2Design to copy it over to Figma. Magicpath and Magic Patterns have native Figma export, but HTML2Design gives you more options like auto layout and hover states, and you can export from any AI builder.

These tools can be great for generating a good first draft, quickly exploring ideas, or handing off truer-to-life code prototypes. I recommend trying a few different tools and prompts and seeing which is the best fit for you and your workflows.

Tools That Aren't Ready Yet

While I'm all in on AI for design, not every tool lives up to the hype right now. Surprisingly, I've found that code generation tools create better UIs than actual UI generation tools.

Figma Make: Despite having direct access to design files, I've never been able to get the results I want from Figma Make. I've seen it struggle with basic tasks like generating a code component from a design component, and it seems to take much longer than other tools. I could see it being useful if I could generate a component and bring it back into Figma, but it's just not there yet.

Google Stitch (formerly Galileo): For a tool that's specifically focused on UI design every layout from Stitch looks the same to me. Like I said, I get better UIs out of code tools.

Uizard: Uizard has some cool tools like generating UIs from hand-drawn wireframes, or generating themes so your UIs can fit into the same system. But the biggest drawback is that there's no Figma export; your UI generations just live in Uizard. It's an interesting tool, but I can't just burn my Figma workflow to the ground.

Replit: Every project I created in Replit took ages to set up because it required all of these boilerplate files, making it painful for quick prototypes or components. v0 feels like it does the basics better, and lets me do much more too.

All in all, for the price tag I think you can do better. I really hope to see these tools improve in the future, but right now, unfortunately, I have to raise a red flag.


UX Design with AI

In addition to generating UIs, AI can also be really helpful as a thought partner when you're solving tough UX problems:

Brainstorming: At the very first step of a project I'll prompt Claude with the design brief, the client company, and my ideas, and ask it to give me follow-up questions and come up with its own ideas. This is also a great use for ChatGPT voice mode, where you can just explain ideas out loud without having to make them a formal written prompt.

Note: In my experience ChatGPT voice is even more of a yes man than normal ChatGPT, so make sure you prompt it to give you critical feedback and play devil's advocate so it doesn't just tell you how smart you are.

Research: When starting a project I'll have Claude's research mode look for inspiration and ideas from other existing software in the same niche, or with a similar audience, and it will write me a report on other software I should look at and key insights from it. And it cites sources with links, so it's easy to continue with my own research.

User flows: When I'm planning out a user flow I'll prompt Claude with my ideas and have it generate a step-by-step flow. This is also a great use for AI if you've already done brainstorming and research in the same chat, because you've given it the context it needs for the problem space. And once you have a user flow you can continue the chat to brainstorm changes, and even generate a flow diagram with Mermaid to document everything.
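Since Mermaid's flowchart syntax is just text, you don't even need AI for simple diagrams; a few lines of code can turn a step list into something a Mermaid renderer (or a Claude chat) will accept. A minimal sketch, with hypothetical step names:

```typescript
// Convert an ordered list of user-flow steps into Mermaid flowchart syntax.
function toMermaid(steps: string[]): string {
  const nodes = steps.map((label, i) => `  S${i}["${label}"]`);
  const edges = steps.slice(1).map((_, i) => `  S${i} --> S${i + 1}`);
  return ["graph TD", ...nodes, ...edges].join("\n");
}

console.log(
  toMermaid(["Open signup page", "Enter email", "Verify code", "Land on dashboard"]),
);
```

Paste the output into any Mermaid-aware tool (Notion, GitHub, or mermaid.live) to see the rendered flow.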

AI has changed my UX design process a lot; I find myself starting with text before sketching anything out. Not only does it help me think through my ideas, it also forces me to put them into words and get out of my own head. And it's seamless to jump between coming up with an idea and bringing it to life with a prototype.

Getting the Most out of Your Chat

You’re probably already using ChatGPT, but here are a few tips to level up the results you get from it.

First a couple basics:

Compare models: Don’t limit yourself to ChatGPT. Try Claude, Gemini, and other tools and see what works best for you. Personally I prefer Claude for most tasks, but I like ChatGPT’s voice mode and image generation, and Gemini’s image generation and context window.

Pay for pro: Whatever tool you're using, it's worth budgeting the $20 per month to upgrade from the free plan. You'll get more usage to experiment with and access to the most powerful models and tools. If cost is a concern, Gemini may be cheaper if you're already paying for Google One, and Gemini is also free for students.

Ok, you’re using the most current models and you’ve picked the best one for you. Here’s how to prompt it for the best results:

Three Core Principles

  1. Provide the right context: Instead of asking AI to do everything at once, give it specific background about you, your goals, and what success looks like. For example, a good way to start a task is to have it interview you to gather more information about your goals before it writes its full response.

  2. Give it permission to be wrong: AI tools are designed to please us and agree with us, which can lead to unhelpful responses that just tell us how smart we are. It helps to explicitly tell the AI that giving only accurate answers, or just saying "I don't know", is more valuable than trying to answer everything. Setting clear criteria for what constitutes a good response helps break this "yes man" pattern.

  3. Don't ask it to do too many things at once: Break complex requests into separate steps. For example, instead of asking it to build a complete website, first work together on planning the details and structure, then move to execution as separate tasks.
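To make principles 1 and 3 concrete, it can help to template your prompts so that context, a single task, and success criteria are always separated. A sketch of the idea (the field names here are my own convention, not any tool's API):

```typescript
// Build a single-task prompt with explicit context and success criteria,
// instead of one giant "do everything" request.
interface PromptSpec {
  context: string;    // who you are and what you're working on
  task: string;       // one task only; split big asks into several specs
  criteria: string[]; // what a good response looks like
}

function buildPrompt(spec: PromptSpec): string {
  return [
    `Context: ${spec.context}`,
    `Task: ${spec.task}`,
    "A good response:",
    ...spec.criteria.map((c) => `- ${c}`),
    `- Says "I don't know" rather than guessing.`,
  ].join("\n");
}

console.log(
  buildPrompt({
    context: "I'm a product designer planning a mobile onboarding flow.",
    task: "List the follow-up questions you'd ask before proposing a flow.",
    criteria: ["Asks about the target user", "Challenges my assumptions"],
  }),
);
```

The fixed last line bakes principle 2 into every prompt, so you don't have to remember to ask for honesty each time.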

Quick Tips for prompting

Treat it like a junior employee: Try to give guidance and clear feedback to set expectations on what you're looking for and what you're not.

Use the right tools for the task: Balance using normal chat with modes and tools like web search, deep research, and thinking modes. Each one is better for different use cases.

Set expectations for tone and output: How you prompt it can determine how it responds to you: for example, prompting it to give pros and cons if you think it might be over-agreeing with you, or asking it to write in full sentences if it uses bullet points too much.

If things get messy start a new chat: You can always copy out the important text into a new chat to “reset”, or even ask the AI to summarize the conversation in a document that you can share as context with the other chat.

Building a Second Brain with AI

Like I mentioned before, one of the things AI is best at is summarizing and searching a lot of information, which makes it great for building a second brain of important information.

Meeting recordings: Granola is my go-to for this. I can record meetings, organize them into projects, and chat with my meetings to recap important information. I also use Granola the way I'd use Loom, recording quick notes and todos so I can reference them later. (You can also level this up by using an MCP server in Claude to generate the actual todos in Notion, Linear, or another project management app.)

Bookmarking and inspiration: I use mymind to bookmark websites and save images and notes. It's a great way to store inspiration, books, and resources to come back to, and AI auto-tags everything so it's easy to search through and find the important stuff. (If you want something more customizable you could also use Notion for this.)

Custom projects/GPTs/Notebooks: If you have resources you need to reference a lot, like documentation or a knowledge base, you can make a Claude project trained on that data so you can chat with it to answer questions or get summaries. I train Claude projects for each of my freelance clients, so I can get answers to quick questions about their businesses without bothering my clients. ChatGPT also offers custom GPTs for this, and NotebookLM offers this for Gemini users.

Note taking, summaries, and search: While I like Granola for automating meeting notes, Notion is probably the best tool for taking your own notes and having a powerful AI to search, summarize, and manipulate the data. Craft and Heptabase also offer AI features, and if you want open source you can extend Obsidian with AI plugins.

Image Generation

The real hurdle to making image generation work for actual design workflows is generating something editable. When AI generates an image you’re just getting a flat image file, so even small tweaks usually mean regenerating from scratch.

Here are a couple tips for getting started:

Reverse engineer a visual style for AI: If you need to generate a lot of images for something, try to plan your visual style around something you know AI is good at. For example, preferring 3D icons over smooth vector icons, where imperfections are more obvious. Or picking an established art style like watercolor or line art that would’ve been included in the training data. Or picking an art style where imperfections are less obvious and easier to edit, like collage.

Prioritize editability: You want to make sure you can edit an image that gets generated if you need to make small tweaks, instead of regenerating it from scratch. It can help to generate a separate image for each layer (for example, a man hiking becomes separate man, landscape, and sky images with backgrounds removed), or you can use a vectorization tool like the one from LottieFiles to turn it into vectors.

And here are some tools you can try out:

ChatGPT: ChatGPT is a great place to start because image generation is just one of many tools you get with your pro plan. It's also extremely good, if a little slow at times. I find it works best with an established style or reference like “Mac desktop gradient wallpaper", or when editing provided images like “combine this picture of a man with this landscape" or “recreate this photo in a cartoon style”.

Visual Electric: VE is another great starting point for experimentation because you get a familiar canvas like Figma or Illustrator, and access to a wide range of models and preset styles you can try out. You also get built in tools like retexturing, background removal, and markup tools that help with editing.

Nano-banana (Gemini): The most powerful image generation model right now, and very fast. Similar to ChatGPT, it seems to do best editing provided reference material, but it can also do impressive things like “show this object from a different angle” or "show this person in a different pose”.

Midjourney: Midjourney is very powerful, offers style reference codes to save and share styles, and frame to frame video for easy animations. For me the biggest downside is that your generations are public by default unless you pay $40/mo, which makes it hard to justify compared to other tools.

Note: For help prompting image generation tools you can also prompt ChatGPT to write prompts for you, and then copy them over to another tool to generate.

What's Next?

The landscape of these tools is changing every day, and I’m trying to keep up myself. Here are a few things I’m working on exploring right now:

Claude Code: The terminal has always been intimidating to me, but I'm really curious to learn how to use Claude Code. It levels up vibe coding because it has the full context of all of your project files so you can do more with it, you can run multiple agents at once, and it's included with your Claude pro plan. Plus this is a good excuse to dive deeper into the engineering side of things, and maybe even self-host my own apps for personal use. And there are UIs like Terragon that will hopefully lower the learning curve, and GitHub Desktop, which will let me step outside the terminal (thankfully).

Notion AI: As much as I promote creating a second brain, I’m not very good at curating one, and I rely on other tools to do that for me. But I'm very interested in the idea of having one source with all of my notes, ideas, recordings, etc. in Notion, and exploring how I can use Notion's agents to generate organized databases from them and then search and ask questions across the data.

NotebookLM: For a long time I knew NotebookLM as the “podcast generator" app, and that wasn't very interesting to me. But after looking into it more it seems like it could be a better fit than Claude for my “knowledge base" AI use cases. Plus Gemini is supposed to have a larger context window, which would be helpful.

Magicpath / Magic Patterns: These are the two killer apps for vibe UI design, and I’m really curious to experiment with them more deeply. The biggest problems I'm hoping to solve are prototyping full flows, matching existing design systems, and generating more complex/custom UIs, and they both seem to be pretty good at all of these.

Magic Animate: LottieFiles recently launched Magic Animate, which can auto-generate animations from your Figma files. Depending on how well it works, this could be a game changer for prototypes and marketing videos.

Remotion: Remotion is a code library for creating videos, and AI is really good at writing code, so in theory you can prompt AI to create Remotion videos. Again, I'll need to test how well this works, but it could be really cool.
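The core idea behind Remotion is that every frame of the video is a pure function of the frame number, which is exactly why AI handles it well. I haven't battle-tested this yet, but here's a dependency-free sketch of that model, with an interpolation helper loosely modeled on Remotion's `interpolate` (a real Remotion project would render React components with its actual API):

```typescript
// Remotion-style animation: a value (here, a title's opacity) computed
// purely from the current frame, clamped to the input range.
function interpolate(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
): number {
  const t = Math.min(Math.max((frame - inStart) / (inEnd - inStart), 0), 1);
  return outStart + t * (outEnd - outStart);
}

// Fade a title in over the first 30 frames (one second at 30fps).
const titleOpacity = (frame: number): number => interpolate(frame, [0, 30], [0, 1]);

console.log(titleOpacity(0));  // 0
console.log(titleOpacity(15)); // 0.5
console.log(titleOpacity(60)); // 1 (clamped past the end of the range)
```

Because each frame is deterministic, an AI only has to write one function well, and the render takes care of the rest.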

The reality is that what's a unique differentiator today will be the baseline expectation a year from now. But that's not scary; it's exciting. Small teams can now move like they have 5 times the headcount with strategic use of AI. As an individual, I can run my freelance agency at a pace that would take me twice as long to match even with two junior designers and an assistant.

I'd rather be sprinting ahead of the pack than sprinting to catch up. And right now, you have just as much opportunity to figure this out as anyone else. The tools are accessible, your company wants you to learn them, and the only thing standing between you and mastery is starting today.


Generalists will win, specialists will struggle. With AI tools enabling designers to write PRDs, make code prototypes, and synthesize research insights, the expectation will be that we have the time and ability to do more than just make static mockups in Figma.

We can't live in Figma anymore. We may not have to become full design engineers or product managers, but we will be pushed to be more and more involved in the entire product lifecycle.


Ok, so what should I be doing now?

Start looking at your current workflow: Where could AI fit in? What are some of the challenging, frustrating, or redundant parts of your work? Starting from these pain points is a great way to think about use cases for AI tools.

Set aside time to play: The best way to learn how these tools work and how they can work for you is to try them out and stress test them with real projects. Block out time in your week just for experimenting and personal projects.

Use AI to learn new skills: AI can help you learn how to write code, learn new software, or anything you want. It can help answer questions and even act as a mentor to make it easier to pick up a new skill. Just pick something that interests you most and explore.

Share what you learn: Share what you're learning with your coworkers and your company and become the “AI person” on your team. Post on Linkedin and Twitter/X. Make personal projects and share them. Be loud about these skills so people will notice.

Follow thought leaders: Find the people online that are doing what you're doing at a high level. These are the trailblazers you want to watch and learn from. This is how you can keep in the loop on what the next big thing is, or how to get the most out of your favorite tools.

It’s a lot, but it’s a solvable problem. The key is figuring out what you’re interested enough in to learn more about, and dedicating the time to go down that rabbit hole.


How AI Fits Into My Workflow

Here are some of my main day to day use cases for AI tools:

Generating code prototypes: This is the biggest game changer. I can use a tool like v0 to prototype custom components, landing pages, UIs, and even full flows so that I’m not just handing off a Figma file to my developers. Claude can even natively prototype AI-powered tools, so you can one-shot prompt your own apps.

As a second brain: AI is great at cataloging information and then summarizing it back to you. For example I use Granola to record meetings and take notes, and then I can chat with it to ask questions about the meeting later. I use mymind to store bookmarks because it can auto tag them so I can always find what I'm looking for. And I'll train Claude projects on specific client projects so I have a go-to assistant to answer questions.

As an interviewer/brainstorming partner: When I have a rough idea that I'm brainstorming on, I’ll prompt Claude to help me think it through. For example, if I have an idea for a user flow I’ll describe it to Claude and ask for its feedback: what are the pros and cons, what am I not considering, what are other ideas, what follow-up questions do you have? Sometimes I’ll use ChatGPT voice and just yap into it to brainstorm, then ask it to summarize our conversation in a doc I can use to refine the ideas.

No more lorem ipsum: With AI, no one should ever use lorem ipsum or generic stock photos again. If I have a table of data in Figma, I’ll put a screenshot of my UI design into Claude and ask it to write out example content for it. If I need placeholder images for a website, I'll generate exactly what I'm looking for in Visual Electric instead of wasting time finding a stock photo that's just good enough.

As a mentor: I have a "design mentor" Claude project that's trained on books, podcasts, and articles from designers I look up to so that I can ask it for feedback and advice. When I generate code with v0 I’ll ask the AI to explain how everything works so I can learn while I'm building. You can even highlight specific parts of the code to explain it. And if I have a big presentation I'll record myself practicing and upload it to Claude for feedback so I can prep beforehand.

Web search and research: When I don’t know where to start with a big question I’ll have Claude run deep research to scour hundreds of web sources to weigh options. I can ask it something like “What’s the best way to organize Figma designs for a big project” or “I want to learn Cursor, where should I start?” and it will give me a huge report based on hundreds of sources. Chatgpt, Gemini, and Perplexity are all also really good for plain language search and research.


UI Design with AI

The good and bad news right now is that AI can only generate good enough UIs on its own. There's still some manual effort required at the start to prompt correctly and at the end to edit what it generates, but it can still be really useful in your workflow.

Tools I've tried that I recommend:

Magicpath: This one feels the most like Figma and is a great place to start. You get an open canvas to generate UIs in, you can create multiple iterations of an idea, you can share prototypes, and you can generate full flows. It supports multiple breakpoints and design systems, and it generates the best looking UIs of any tools I've used so far.

Magic Patterns: Similar to Magic Path as a "design focused" generation tool. You can generate full UIs or just individual components, create design systems, test breakpoints, and export right to Figma. Just like Magic path it generates really good looking UIs, and they offer quality of life features like @ mentioning to add components and preset design systems.

v0: Great swiss army knife tool for generating more powerful prototypes. The best thing about v0 is you can create components or full UIs, and you can edit the code yourself. It's like cursor, but you don't have to set up your own dev environment. You also have powerful tools to build functional prototypes like adding user authentication, databases, or APIs like Anthropic's to add AI features to your prototype. And it also has a design mode where you can edit design details without prompting.

Lovable: Another tool that generates great looking UIs and is very user friendly for non developers.. Similar overall features as v0, but better for generating a full website or app than single components.

Some tips on using these tools for UI design:

Make sure you're starting with a great prompt: Unless you want AI to just wing it (which can be useful in some cases) it's worth it to write a good requirements doc before jumping into a UI tool. Be specific about what's in scope and out of scope and think about what you want it to focus on. If you're generating a dashboard maybe specify that you don't need the navigation or customization options. If you're prototyping a mobile app then mention that desktop breakpoints are out of scope.

Learn the basics of Tailwind CSS: Tools like v0 and Lovable have a "design mode" that will let you tweak Tailwind CSS variables for the UI, which is faster and cheaper than re-prompting. You can also edit the code yourself, so it really helps to know the basics.

Assume you'll get 60% of what you want: 80 if you're lucky. AI tools don't always have a great eye for design, or consistency, and they definitely don't know your customers as well as you do. So plan to manually edit a good portion of the UI to get it where you want it.

Upload your design system: That being said, more and more tools are supporting custom design systems. v0, Magic Path, and Magic Patterns have options for this, and you can also add custom instructions to maintain consistency across different projects.

Use HTML2design to bring it to Figma: However you generate code with AI, you can use HTML2Design to copy it over to Figma. Magic path and Magic patterns have native Figma export, but HTML2Design gives you more options like auto layout, hover states, and you can export from any AI builder.

These tools can be great for generating a good first draft, quickly exploring ideas, or handing off true-er to life code prototypes. I recommend trying a few different tools and prompts and seeing what's the best fit for you and your workflows.

Tools That Aren't Ready Yet

While I'm all in on AI for design, not every tool lives up to the hype right now. Surprisingly I've found that code generation tools create better UIs than actual UI generation tools.

Figma Make: Despite having direct access to design files, I've never been able to get the results I want from Figma Make. I've seen it struggle with basic tasks like generating a code component from a design component, and it seems to take much longer than other tools. I could see it being useful if I could generate a component and bring it back into Figma, but it's just not there yet.

Google Stitch (formerly Galileo): For a tool that's specifically focused on UI design every layout from Stitch looks the same to me. Like I said, I get better UIs out of code tools.

Uizard: UIzard has some cool tools like generating UIs based on hand drawn wireframes, or generating themes so your UIs can fit into the same system. But the biggest downfall is that there's no Figma export, your UI generations just live in UIzard. It's an interesting tool, but I can't just burn my Figma workflow to the ground.

Replit: Every project I created in Replit took ages to set up because it required all of these boilerplate files, making it painful for quick prototypes or components. v0 feels like it does the basics better, and lets me do much more too.

All in all for the price tag I think you can do better. I really hope to see these tools improve in the future, but right now unfortunately I have to wave the red flag.


UX Design with AI

In addition to generating the UIs, AI can also be really helpful as a thought partner when you're solving tough UX problems:

Brainstorming: At the very first step of a project I'll prompt Claude with the design brief, the client company, and my ideas and ask it to give me follow up questions and come up with its own ideas. This is also a great use for Chatgpt voice mode, where you can just explain ideas out loud without having to make them a formal written prompt.

Note: In my experience Chatgpt voice is even more of a yes man than normal Chatgpt, so make sure you prompt it to give you critical feedback and play devil's advocate so it doesn't just tell you how smart you are.

Research: When starting a project I'll prompt Claude research to look for inspiration and ideas from other existing software in the same niche, or with a similar audience and it will write me a report on other software I should look at and key insights from them. And it cites sources with links so it's easy to continue with my own research.

User flows: When I'm planning out a user flow I'll prompt Claude with my ideas and have it generate a step by step flow. This is also a great use for AI if you've already done brainstorming and research in the same chat, because you've given it the context it needs for the problem space. And once you have a user flow you can continue the chat to brainstorm changes, and even generate a flow diagram with mermaid.js to document everything.

AI has changed my UX design process a lot, I find myself starting with text before sketching anything out. Not only does it help me think through my ideas, but it also forces me to put them into words and get out of my own head. And it's seamless to jump between coming up with an idea and bringing it to life with a prototype.

Getting the Most out of Your Chat

You’re probably already using Chatgpt, but here are a few tips to level up the results from it.

First a couple basics:

Compare models: Don’t limit yourself to Chatgpt. Try Claude, Gemini, and other tools and see what works best for you. Personally I prefer Claude for most tasks, but I like Chatgpt’s voice mode and image generation, and Gemini’s image generation and context window.

Pay for pro: Whatever tool you're using it's worth budgeting the $20 per month to upgrade from the free plan. You'll get more usage to experiment with and access to the most powerful models and tools. If cost is a concern then Gemini may be cheaper for you if you're already paying for Google One. And Gemini is also free for students too.

Ok, you’re using the most current models and you’ve picked the best one for you. Here’s how to prompt it for the best results:

Three Core Principles

  1. Provide the right context: Instead of asking AI to do everything at once, give it specific background about you, your goals, and what success looks like. For example a good way to start a task can be having it interview you to gather more information about your goals before it actually writes its full response.

  2. Give it permission to be wrong: AI tools are designed to please us and agree with us, which can lead to unhelpful responses when it tells us how smart we are. It helps to explicitly tell the AI that only giving accurate answers or just saying "I don't know" is more valuable than trying to answer everything. If you set clear criteria for what constitutes a good response then it can break this "yes man" pattern.

  3. Don't ask it to do too many things at once: Break complex requests into separate steps. For example instead of asking it to build a complete website, first work together on planning the details and structure, and then move to execution as separate tasks.

Quick Tips for prompting

Treat it like a junior employee: Try to give guidance and clear feedback to set expectations on what you're looking for and what you're not.

Use the right tools for the task: Balance using normal chat with modes and tools like web search, deep research, and thinking modes. Each one is better for different use cases.

Set expectations for tone and output: How you prompt it can determine how it responds to you, for example prompting it to give pros and cons if you think it might be over agreeing with you, or asking it to write in full sentences if it uses bullet points too much.

If things get messy start a new chat: You can always copy out the important text into a new chat to “reset”, or even ask the AI to summarize the conversation in a document that you can share as context with the other chat.

Building a Second Brain with AI

Like I mentioned before, one of the things AI is best at is summarizing and searching a lot of information, which makes it great for building a second brain of important information.

Meeting recordings: Granola is my go-to for this. I can record meetings, organize them into projects, and chat with my meetings to recap important information. I also use Granola like Loom videos to record notes and todos so I can reference them later. (You can also level this up by using an MCP server in Claude to generate the actual todos in Notion, Linear, or another project management app).

Bookmarking and inspiration: I use Mymind to bookmark websites and save images and notes. It's a great way to store inspiration, books, and resources to come back to, and AI auto-tags everything so it's easy to search through and find important stuff. (If you want something more customizable, you could also use Notion for this.)

Custom projects/GPTs/Notebooks: If you have resources you need to reference a lot, like documentation or a knowledge base, you can create a Claude project loaded with that data so you can chat with it to answer questions or get summaries. I set up Claude projects for each of my freelance clients, so I can get answers to quick questions about their businesses without bothering them. ChatGPT also offers custom GPTs for this, and NotebookLM offers this for Gemini users.

Note taking, summaries, and search: While I like Granola for automating meeting notes, Notion is probably the best tool for taking your own notes and having a powerful AI to search, summarize, and manipulate the data. Craft and Heptabase also offer AI features, and if you want open source you can extend Obsidian with AI plugins.

Image Generation

The real hurdle to get over for making image generation work in actual design workflows is generating something editable. When AI generates an image, you're just getting a flat image file rather than layers or vectors you can tweak.

Here are a couple tips for getting started:

Reverse engineer a visual style for AI: If you need to generate a lot of images for something, plan your visual style around something you know AI is good at. For example, prefer 3D icons over smooth vector icons, where imperfections are more obvious. Or pick an established art style, like watercolor or line art, that would've been included in the training data. Or pick an art style where imperfections are less obvious and easier to edit, like collage.

Prioritize editability: You want to be able to edit a generated image when you need small tweaks, instead of regenerating it from scratch. It can help to generate a separate image for each layer (for a man hiking: separate man, landscape, and sky images with backgrounds removed), or you can use a vectorization tool like the one from LottieFiles to turn it into vectors.

And here are some tools you can try out:

ChatGPT: ChatGPT is a great place to start because image generation is just one of many tools you get with your Pro plan. Its image generation is also extremely good, if a little slow at times. I find it works best with an established style or reference like “mac desktop gradient wallpaper", or when editing provided images, like “combine this picture of a man with this landscape" or “recreate this photo in a cartoon style”.

Visual Electric: VE is another great starting point for experimentation because you get a familiar canvas like Figma or Illustrator, and access to a wide range of models and preset styles you can try out. You also get built in tools like retexturing, background removal, and markup tools that help with editing.

Nano-banana (Gemini): The most powerful image generation tool right now, and very fast. Like ChatGPT, it seems to do best editing provided reference material, but it can also do impressive things like “Show this object from a different angle” or "show this person in a different pose”.

Midjourney: Midjourney is very powerful, offers style reference codes to save and share styles, and frame to frame video for easy animations. For me the biggest downside is that your generations are public by default unless you pay $40/mo, which makes it hard to justify compared to other tools.

Note: For help prompting image generation tools, you can also ask ChatGPT to write prompts for you, then copy them over to another tool to generate.

What's Next?

The landscape of these tools is changing every day, and I’m trying to keep up myself. Here are a few things I’m working on exploring right now:

Claude Code: The terminal has always been intimidating to me, but I'm really curious to learn how to use Claude Code. It levels up vibe coding because it has the full context of all of your project files so you can do more with it, you can run multiple agents at once, and it's included with your Claude Pro plan. Plus this is a good excuse to dive deeper into the engineering side of things and maybe even self-host my own apps for personal use. And there are UIs like Terragon that will hopefully lower the learning curve, and GitHub Desktop, which will let me step outside of the terminal (thankfully).

Notion AI: As much as I promote creating a second brain, I'm not very good at curating one, and I rely on other tools to do that for me. But I'm very interested in the idea of having one source with all of my notes, ideas, and recordings in Notion, and exploring how I can use Notion's agents to generate organized databases from them and to search and ask questions about the data.

NotebookLM: For a long time I knew NotebookLM as the “podcast generator" app, and that wasn't very interesting to me. But after looking into it more, it seems like it could be a better fit than Claude for my “knowledge base" AI use cases. Plus Gemini is supposed to have a larger context window, which would be helpful.

Magicpath / Magicpatterns: These are the two killer apps for vibe UI design, and I’m really curious to experiment with them more deeply. The biggest problems I'm hoping to solve are prototyping full flows, matching existing design systems, and generating more complex/custom UIs, and they both seem to be pretty good at all of these.

Magic Animate: LottieFiles recently launched Magic Animate, which can auto-generate animations from your Figma files. Depending on how well it works, this could be a game changer for prototypes and marketing videos.

Remotion: Remotion is a code library for creating videos, and AI is really good at writing code, so in theory you can prompt AI to create Remotion videos. Again, I'll need to test how well this works, but this could be really cool.
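To make the Remotion idea concrete: in Remotion, a video is essentially a function of the current frame number, and components compute their animated values from that frame. The sketch below is a simplified, framework-free stand-in for that model (real Remotion code would be a React component using `useCurrentFrame()` and `interpolate()` from the `remotion` package); `fadeInOpacity` is a hypothetical helper name, not part of Remotion's API.

```typescript
// Simplified model of frame-based animation: map a frame number to an
// opacity value, fading in over the first `fadeFrames` frames and
// clamping the result to the range [0, 1].
function fadeInOpacity(frame: number, fadeFrames: number): number {
  return Math.min(1, Math.max(0, frame / fadeFrames));
}

// At 30fps, a 1-second fade-in spans 30 frames:
console.log(fadeInOpacity(0, 30));  // start of the fade
console.log(fadeInOpacity(15, 30)); // halfway through
console.log(fadeInOpacity(60, 30)); // clamped after the fade completes
```

This frame-to-value pattern is exactly the kind of mechanical, well-specified code that AI tools tend to generate reliably, which is why "prompt AI to write Remotion videos" is plausible in the first place.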

The reality is that what's a unique differentiator today will be the baseline expectation a year from now. But that's not scary; it's exciting. With strategic use of AI, small teams can now move like they have five times the headcount. As an individual, I can run my freelance agency at a speed that would otherwise require two junior designers and an assistant.

I'd rather be sprinting ahead of the pack than sprinting to catch up. And right now, you have just as much opportunity to figure this out as anyone else. The tools are accessible, your company wants you to learn them, and the only thing standing between you and mastery is starting today.

Subscribe for more articles like this