The video presents a practical examination of agentic workflows and their advantages over traditional visual automation tools like n8n. It emphasizes how tools such as Claude Code can be harnessed for effective automation with minimal input, while also detailing when to use these workflows versus more familiar automation methods.
Topics covered:
- Definition and Distinction of Agentic Workflows
- Workflow Development Process: Planning, Building, Testing, Committing
- Comparison of Tools: n8n Advantages vs. Agentic Workflows Advantages
The video effectively articulates the potential of agentic workflows, especially in contexts demanding complex automation. It provides insights on how to balance the use of coding tools with visual automation to optimize workflow efficiency. The practical examples and detailed breakdowns underscore the relevance of understanding both coding and automation principles in today’s tech landscape.
"There's a lot more nuance to this conversation than most people are making out because it depends on your technical skills, your appetite for risk, and what workflows you're actually looking to automate."
"AI coding tools like Claude Code are exceptionally strong at being able to perform tasks like understanding different API docs, at creating scripts to analyze and match data together."
Everyone's talking about agentic workflows at the moment, where you can use tools like Claude Code or Antigravity to automate research, lead gen, document creation, and much more with nothing more than a few basic instructions. In this video, I'll show you how Claude Code can be incredibly effective at churning out some pretty complex workflows with very little input. And I'll give you access to a repo that can get you up and running right away. But the hype around agentic workflows is leaving a lot of people confused. Should you completely ditch n8n and any other workflow tools you're using? For a lot of people watching this, the answer is probably not. There's a lot more nuance to this conversation than most people are making out, because it depends on your technical skills, your appetite for risk, and what workflows you're actually looking to automate. I'll be exploring all of this in the video, and later on, I'll also explain three different ways that you can pair Claude Code and n8n together, because sometimes they can be a fantastic combination. This video is all about learning what these tools can do and choosing the right ones for the job. So, what exactly are agentic workflows? They can mean a few different things. The simplest definition is that you're using Claude Code to create Python scripts that can execute tasks for you. And by the way, while I'm using Claude Code, the principles here apply to whatever coding agent you're using, whether that's Google Antigravity or Cursor, for example. Throughout this video, as we're building out these agentic workflows, we'll be following a four-step process: plan, build, test, and commit. First off, we're going to make sure that the agent correctly plans both the functional and technical implementation of that particular workflow. Then you can review it and make sure everything is correct before you trigger the build phase.
After that, the agent can test its own workflow, potentially iterate on it and improve its own code, in addition to you actually validating that the thing works. Then we can commit that code to Git, which is a really important part of the process. After that, we can go back again, iterate on the same workflow, or create other ones. The result is a set of workflows that we can trigger again and again. For example, we have a workflow here that checks the news via an RSS feed, heavily filters it using AI, and then sends the results via Telegram. This all happens on our local machine, but we can also deploy it to the cloud. You can have this run on a schedule, so you never have to touch it, and it will run reliably pretty much every single time. Python projects can be incredibly sophisticated, but to simplify things a bit, you could compare a Python file to an n8n workflow. Instead of chaining nodes together visually in a workflow, Claude Code creates these Python files. But instead of just running these repeatable scripts, Claude Code is very often used as a very advanced co-pilot. Claude Code has access to your file system. It can spin up subagents to handle complex tasks. It can search the web. It can communicate with other agents via MCP. It can run commands on your machine, and much more. These are both great ways to use Claude Code. But generally, when I'm talking about agentic workflows, I'm talking about using Claude Code to create, test, and debug these types of Python scripts and iterate on them as necessary. Before we get into building agentic workflows, I want to give you a bird's-eye view of where they might fit within your overall automation stack. While I'm comparing these against n8n here, many of the same principles apply to other workflow automation tools. Firstly, they can handle complex workflows very easily and usually very quickly.
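The fetch, deduplicate, filter, and send pipeline described above can be sketched in a few lines of plain Python. This is a minimal illustration rather than the repo's actual implementation: the function names are my own, and the parsing assumes a standard RSS 2.0 feed like the one Google News returns.

```python
import xml.etree.ElementTree as ET

def parse_rss_items(xml_text: str) -> list[dict]:
    """Extract title/link pairs from an RSS 2.0 feed body."""
    root = ET.fromstring(xml_text)
    return [
        {"title": item.findtext("title", ""), "link": item.findtext("link", "")}
        for item in root.iter("item")
    ]

def dedupe_items(items: list[dict]) -> list[dict]:
    """Drop items whose normalized title has already been seen."""
    seen, unique = set(), []
    for item in items:
        key = item["title"].strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(item)
    return unique
```

The AI filtering step would then run over the deduplicated list, and only the top items would be forwarded on to Telegram.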
For example, Claude Code put together this workflow in one prompt, whereas it would have taken a very long time to manually wire all of those nodes together in a visual workflow tool. Claude Code can debug its own code. It can check error logs and then iterate upon itself until it gets it right, which is a huge advantage. If you need to build advanced workflows that require pretty advanced retrieval patterns or harnesses, then Claude Code would usually make very quick work of those via code, whereas it would be very cumbersome within a visual tool. At this point, you're probably wondering, what's the catch, and why would I not use agentic workflows for everything? There are a few reasons. One is that AI coding agents can go kind of rogue. They can unexpectedly call external APIs or tools by mistake, or during testing, and in some cases that could be really dangerous. Also, if you can't read the code, you're effectively black-box testing the system. That can be completely fine in plenty of use cases, but sometimes you really need to know what's going on under the hood, especially for really sensitive operations. For some clients, you can hand off an n8n dashboard and train them to use it, whereas there's no real easy client handoff with agentic workflows. While it's easy to get up to speed with Claude Code, you usually also need to understand a bunch of other concepts along the way. One concept that almost everyone building agentic workflows needs to understand is how Git works: how to commit code to a repository, how to create branches, and how to revert code if you need to. Deployment can sometimes get a bit complex depending on what approach you're using. Also, related to the first point, there are no guardrails by default. So if you need to enforce a human-in-the-loop step for a particular task, that's not really included by default in agentic workflows. Later on, I'll be showing you a trick that can help with that.
Where does n8n still fit into the equation? Well, it's visual and auditable. You can really see and track through every single node as it's happening. It's less technical for simpler workflows or where you're using the built-in integrations. As things get a bit more complex, that's not quite the case, so I put an asterisk at the end of that. It has a massive library of built-in integrations, and you can also use agentic workflows to interact with n8n workflows via webhooks to tap into those as well. n8n also abstracts a bunch of concepts that you otherwise don't need to understand. It's easy to deploy, maintain, and version-control your workflows. But as mentioned, there's a very real complexity to dealing with n8n. In many of our past videos, we've really stretched n8n to its limits, and sometimes it was quite cumbersome to do so. And there are just UI-imposed limitations. Some things, like looping over items, are quite cumbersome to do within n8n, whereas they're very easy to do in code, and things can very quickly turn into a rat's nest of nodes if you're doing complex tasks and chaining lots of data together. And finally, if you're operating at scale, such as through lots of executions or lots of data processing, then generally code-based workflows are far superior to n8n. Let's get started creating an agentic workflow in Claude Code. If you want to follow along with this project, then check out the link in the description, where I've shared a public GitHub repo that you can clone directly. Once you've opened it up, we can get started. I want to very quickly create a news engine workflow, one that's able to fetch news feeds from potentially multiple different sources, use AI to heavily filter that news for a high signal-to-noise ratio, and then send summaries to me via Telegram. To get started creating this workflow, I want to make sure that we're in plan mode.
So, I'm going to press Shift+Tab to make sure we're in that mode, and then I'm going to type /new-workflow. This /new-workflow command calls a skill that's been included within the repo that you can get access to in the description. It effectively provides a blueprint for Claude Code to create the initial plan and spec for this particular workflow. This can really streamline the creation of these workflows, make sure that the AI is on the right track, and also make sure that the AI has the requirements up front: how do you want to deploy this kind of workflow? Do you want to keep it local? Do you want to deploy it externally to a service like modal.com, for example, which we're not affiliated with, but which is a useful service? First off, it's asking what the workflow name is, so we're going to type "news engine". As a description, I've written: a workflow that fetches and heavily filters the news with AI, sending the results via Telegram. Next up, it's asking for a detailed description or a specification. We can be pretty high level with this. We just want to provide as much detail as required for Claude Code to actually execute the task. Ideally, I don't want to use any paid news API. So, I'm going to start by saying this workflow should fetch the top 15 results from the Google News RSS feeds with the following queries: AI news, artificial intelligence, and AI tools. When you have all the items, remove any duplicates and then filter them according to the following criteria. We're looking out for any new AI models, any major AI tool releases or upgrades, or any thoughts from AI industry leaders. We want to filter out any listicles, generic blog articles, investment or stock articles, or local news, for example. And when that's done, I want you to send the top seven news items according to those criteria to me via Telegram. Okay, that should be good enough for the moment.
Next, it's asking us how we want to deploy this workflow. Do we want it to just run locally, or to run locally as well as be deployed for cloud execution? Cloud execution clearly has the benefit that it will be able to run when the computer is not online. It is possible to schedule locally on your computer as well using cron or Task Scheduler, but having it execute in the cloud is very, very handy. So, we've selected local and Modal in this case, and I'm ready to submit the answers. Next up, it's asking: do we want this to run on a schedule, manually, or via a webhook? Now it has everything it needs, and it's using this agent harness in the back end to spin up some explore agents to have a look through the codebase. Then it's going to put together a plan as required to create this workflow. It's now finished its explore phase, and it's spun up a separate planner agent to create the plan. Now it's asking me for permission to perform a web search, which I'm just going to allow. In this case, I've not set it to run with dangerously-skip-permissions, so I do need to confirm actions as they're happening. Of course, you can enable that if you want as well; ideally, you'll use a sandbox. Okay, after a few minutes of processing, it's come up with a pretty detailed plan, which is great. It has overall context of what it's going to do and the file it's going to create, and it's planning to create a new Telegram secret, which is definitely required: the chat ID and a bot token. Again, it's very good that it's already come up with this, because these are required. It's also going to update this catalog.md file. This is a markdown file where Claude Code will be able to keep track of all of the workflows that we have within this project and what stage they're at, whether they're planned, and how we want to schedule them.
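For the local-only option, the scheduling that Modal would otherwise handle in the cloud can be done with cron on macOS/Linux (Task Scheduler is the Windows equivalent). A hypothetical crontab entry, assuming the repo was cloned to your home directory and the workflow script lives under a workflows/news_engine/ folder:

```shell
# Run the news engine every morning at 08:00 local time.
# Install with: crontab -e  (paths and filename below are illustrative)
0 8 * * * cd ~/agentic-workflows && .venv/bin/python workflows/news_engine/news_engine.py
```

The trade-off, as noted above, is that this only fires while the machine is on and awake.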
So, we could initially plan lots of different workflows here and then later on get Claude Code to build all of those in one go. Let's accept the edits here; I'll press Enter. Now it's creating the initial scaffold for this project. It's not actually building the project just yet; it's creating the initial files, the initial scaffolding. So we can just have a quick look through them to make sure everything's okay, and then we can go and build. On the left-hand side, we see it's created this new folder within workflows called news engine. We have an initial scaffold for a Python file and an initial readme file. Again, this is going to give Claude some really good context for what it's doing. Okay, now a plan has been created. I can go into the news engine workflows section here and go to the readme. Within the environment variables in our project, you see we have this .env.example, and it's added in the Telegram bot token and chat ID as an example. So I'm going to add those into my .env file; we just need to basically copy and paste those in. The same needs to be done for your OpenAI key. You can also get Claude Code to use a different AI API, or even use local models. But for the OpenAI key, just go to your OpenAI dashboard, generate a key, and copy that in. Make sure to update the .env file. As for the Telegram bot token and chat ID, it's very easy to set up a Telegram bot; you can do it within a few minutes. Just start a conversation with BotFather, send /newbot, and then you get a token, which you paste in here. Then simply start a conversation with your bot and go to this URL in your browser. Just paste your token in here, and you'll see the chat ID for this particular chat, which you paste in there. That's all you need to get set up with a Telegram bot and get a chat ID specific to you, which is really useful. I currently have these configured in my .env file.
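Once the bot token and chat ID are in your .env, the Telegram side of the workflow is a single HTTP call to the Bot API's sendMessage method. Here's a minimal standard-library sketch; the helper names are my own, and only the request-building part is meant to be exercised without a live token:

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"

def build_send_message_request(token: str, chat_id: str, text: str) -> urllib.request.Request:
    """Build (but do not send) a Telegram Bot API sendMessage request."""
    url = f"{API_BASE}/bot{token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

def send_telegram_message(token: str, chat_id: str, text: str) -> dict:
    """Send the message and return Telegram's JSON response."""
    req = build_send_message_request(token, chat_id, text)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In the real workflow, the token and chat ID would be read from the .env file (or from a Modal secret once deployed) rather than passed around as plain strings.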
So I've just copied this file, removed the .example at the end, and then filled out the Telegram and OpenAI keys. Okay, quickly, for the plan: it's going to fetch the AI news from Google News RSS, deduplicate it, filter for high-signal items, and then send the results. This plan is looking really good, and I'm happy to go to the next stage. Now, on the right-hand side, Claude Code has automatically preempted what we're supposed to do next, which is to type /build news engine. So I'm going to press Enter on that. Build calls a separate skill over here that's also been included within the repo. This is another blueprint, or checklist, that it's going to go through to make sure it stays on track throughout the process. Now, it's working through this plan pretty well. We have this catalog file over here which, if we bring it over to the right, is basically a markdown table where it's keeping track of all of the workflow items within our project. You can see this is already marked as in progress. Okay, it's worked through that very quickly. I'm just going to proceed and allow it to run a test of the process. Now, it's going to smoke test the entire pipeline. I'm going to accept. There we go: it worked successfully during its very first smoke test of the workflow, which I'm quite impressed by. I probably shouldn't be too surprised, because Claude Code is very powerful, but it is fantastic to be able to just plan out a workflow, build it, and have it work the first time around. If we click into one of these, it goes directly to the news article. So, I'm pretty impressed by how that turned out. We can now have conversations with Claude Code to update the code, update the prompting for the AI to try to improve how it filters these news items, or change how they're presented.
But for the moment, I'm going to skip that and just deploy this straight to the cloud, so it runs every single day. For deployment, I'm going to use modal.com, which I'm not affiliated with, but which I have found very useful and very easy to integrate with. Currently, when you sign up, you get $5 worth of credits, and this goes a very long way. Whenever you're running an app, calling an endpoint, or running on a schedule, the run usually lasts for a very short space of time, so it will use up a very small amount of this credit. You can process quite a lot with even this amount, and if you add a card, you can get lots more credits. So, you're pretty unlikely to hit this limit for the type of use cases that we're talking about here. To get set up, create an account on modal.com and go to the quick start guide; it is ridiculously fast to get set up. You can get Claude Code to go through this process, but you can also just go to the command line and run pip install modal. After that, you run this command, a pop-up appears, and you just accept in your web browser. It goes through authentication, and then you're good to go: you're now authenticated on your machine. It's already preempted my next move, which is to deploy it to Modal. I'm going to press Enter and let it go through the process. In this project, I've defined a Modal bearer token. That means if you're creating any webhooks on the Modal side that can be called externally, so you could call those from an external tool like ClickUp or n8n for example, then you can secure them and require an authentication bearer token to access those webhooks. You can ignore that for the moment, because we're only adding a scheduled task here, so we don't actually need it. Continue along with the process, and hopefully this should deploy pretty shortly.
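The Modal setup described above boils down to a handful of commands. This is a sketch of the flow from the video; check Modal's quick start guide for the current CLI, and note that the deploy path here is illustrative:

```shell
pip install modal        # install the Modal client library and CLI
python -m modal setup    # opens a browser pop-up to authenticate this machine

# Once authenticated, deploy the workflow's Modal app (path is illustrative):
modal deploy workflows/news_engine/news_engine.py
```

After a successful deploy, the app and its schedule show up in the modal.com dashboard, which is where the manual "run now" trigger in the next section lives.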
We can see here that it looks like it's successfully deployed. Now, if I go to modal.com, I can see that this news engine app has been created. I'm going to click into it, and from here we can see that Claude Code has created this scheduled job, which is now going to run in 18 hours and 50 minutes. But I can click "run now" and trigger this manually as a one-off run, so I'll do that right now. A notice comes up at the bottom right to say it's scheduled an immediate run, and we can see that it is now running. It's now succeeded, and we've received a message on Telegram. So this external Python code has run successfully, and it will continue to run on a daily basis from now on. If I want to make any updates, I go into Claude Code, get it to make the updates, test them locally, and then deploy when I'm ready. And then, very importantly, I want to commit this code to Git, so that we have proper version control of each of these new features and workflows as we go along. So I'm allowing it to commit this to our Git repository, and it's now committed that code to our branch. If I go into GitHub Desktop, I can see that it has added this commit: add news engine workflow, fetches AI news from Google RSS. It's added this commit message as well as a nice description, and we see all of the changed files within it. If we wanted, we could just right-click this and revert the changes from that commit. So if we want to completely roll back those changes, we can do that very easily by using version control. Now this is in our local Git repository, but Claude Code has also prompted us to push it. When we do that, it will no longer just be on our local machine; it will also be saved in the cloud. So I'm going to allow that to proceed. This version of the codebase is now committed to a private GitHub repo.
That means that our agent can now continually add new features and update this codebase. We can then pull that code from a different computer, for example, or an autonomous agent could even work off this codebase as well. When you're working with code, it's absolutely vital that you understand how Git works. Make sure you know how to create a Git repository, how to commit and push code to it, and how to look at the history and revert changes if necessary. These are really vital skills, especially as you start to maintain older software or use autonomous agents to work off a particular codebase. It's really important to understand the version control of your code. To quickly wrap up this news engine example, I've made some improvements behind the scenes to get better results and to make sure we're not surfacing the same news stories over and over again. To take a step back, there was one key weakness in the way Claude Code implemented this particular tool: it did not deduplicate news stories across runs. So, day after day, it could resurface the same news stories again and again. The question is, how do we maintain persistence across runs, so that the next time this tool runs it won't surface the same articles, or ones that are very similar? One pretty neat way to do that is to use a database. But we don't want to go through the hassle of setting up an entirely separate database for this. An easier way is to just get Claude Code to store a database file directly within Modal storage, using a SQLite database. SQLite databases are incredibly useful because you can query them and create tables, all stored within a single file, as you can see here. So Claude Code updated its plan to initialize a SQLite database with a sent_articles table. It filters out articles already in the database, and after sending, it records the new articles in the database.
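A sketch of what that sent_articles persistence layer might look like with Python's built-in sqlite3 module. The exact schema Claude Code generated isn't shown in full in the video, so the column names here are illustrative (a normalized title key, the link, and a sent timestamp):

```python
import sqlite3

def init_db(path: str = "sent_articles.db") -> sqlite3.Connection:
    """Open (or create) the dedupe database with a sent_articles table."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS sent_articles (
               title_key TEXT PRIMARY KEY,
               link      TEXT,
               sent_at   TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn

def filter_unsent(conn: sqlite3.Connection, items: list[dict]) -> list[dict]:
    """Keep only articles whose normalized title is not already recorded."""
    fresh = []
    for item in items:
        key = item["title"].strip().lower()
        row = conn.execute(
            "SELECT 1 FROM sent_articles WHERE title_key = ?", (key,)
        ).fetchone()
        if row is None:
            fresh.append(item)
    return fresh

def record_sent(conn: sqlite3.Connection, items: list[dict]) -> None:
    """Record articles after they have been sent so future runs skip them."""
    conn.executemany(
        "INSERT OR IGNORE INTO sent_articles (title_key, link) VALUES (?, ?)",
        [(i["title"].strip().lower(), i["link"]) for i in items],
    )
    conn.commit()
```

Because the whole database lives in one file, dropping it into Modal's persistent storage (as described above) is enough to carry state between scheduled runs.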
When I download this database file, I can open it up on my computer. I have a tool called DB Browser, and I've just opened up the file that I downloaded. If I go to the tables and into sent_articles, I can see that Claude Code has created this database with a title key, a link, and when the article was sent. That means future runs of this tool are going to reference that file and update it continuously. So you can use this storage area within Modal to store database files or JSON files, for example, and those are really useful for having some persistence between runs. Finally, I got Claude to filter out older articles so they're not refetched, reworked the prompting a bit, and also got this to run twice a day instead of once. Now, for another workflow I previously got Claude Code to set up here: it's an SEO automation. We want to get the top keywords for a bunch of different websites, combine all the reports together, and do that for both Google and Bing. I'll give you an example of how that runs. I'm just going to say, run the SEO top keywords for the following websites, paste in a bunch of websites, and press Enter. From here, Claude Code is acting as the orchestrator. It's going to get the workflow, execute the code, and then we should have the result. Now we can see the workflow fetched all those keywords across four domains and output an Excel file, which we can see here. When we look into this Excel file, we have a combined analysis of the top-ranking pages across all of those sites. So in this case, I'm getting it to run a previously created Python script. As you build up more and more workflows like this, you can get Claude Code to run them whenever you have something previously codified, or you can just get it to do ad hoc tasks on the fly using its own internal agent harness.
But similar to our news engine workflow, getting Claude Code to create this top SEO keywords workflow was ridiculously easy. I just prompted it in natural language to get the top-ranking Google and Bing keywords from a list of websites using the DataForSEO service, and it just went and did it. It looked up and figured out the DataForSEO API specs, created a plan, and when I asked it to build, it just worked the first time around. Again, that's not going to happen every single time, but AI coding tools like Claude Code are exceptionally strong at tasks like understanding different API docs, creating scripts to analyze and match data together, and creating export files like this. This is a great example of where I got Claude Code to create a script that's able to pull data from an API and then process it in an exact way every single time. The beauty is that there's no AI used at all throughout this process, so it can run almost instantly, and it will run exactly the same every single time. If it's possible to create these kinds of deterministic workflows, then this is the best of both worlds: get Claude to create them, verify the output, and then use them locally or deploy them however you want. For this kind of automation, it might make a lot of sense to just have a lot of different Python files that you can run locally. Claude Code has access to your local file system; you can mash lots of Excel files together, for example, and have a really efficient local workflow. There's a lot of talk of Claude Cowork at the moment, which has access to your local file system, and you can use it as a co-pilot to be way more efficient with your work on your local machine. You can do this at a whole other level using Claude Code. But not only that, you could get Claude Code to build specific internal apps that run directly on your machine. This is a Python app using the PyQt library.
In this app, I can select a file and process the data, and it outputs it in the correct format. You can get way more complicated than this. So if you want to create an internal tool that runs on your computer, you don't need to go through the entire web application architecture process. You can just get Claude Code to wrap your Python script in a nice front-end UI, using a library like PyQt, and then make that tool launchable, so you just need to double-click it and it opens up, as you can see here. Here are some ways that you can leverage your n8n knowledge in combination with Claude Code. Firstly, you can use n8n as a safeguard: a controlled endpoint for whatever you need to do. For example, if you don't want to give your AI agent free rein to just send emails, you could create a webhook within n8n, add a human-in-the-loop node with any of these approval methods, and then filter to make sure that the approval result is true, only sending the message if it is. This works for human-in-the-loop, or really any type of workflow where you want to be ultra-controlled about exactly what your coding agent is able to access. This could be a production system, for example, or you could be sending a message with some of the information already pre-filled. Another good combo is using n8n's built-in integrations. n8n already comes with over 500 integrations. If you've already set up connections to certain tools, you can just leverage those from within your AI workflows and make them available to your AI coding agent via webhooks. And finally, something quite different: using Claude Code and n8n's MCP to create, manage, and debug workflows programmatically from within Claude Code.
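The safeguard pattern above is straightforward from the Python side: the agent's script never talks to the email provider or production API directly; it only POSTs a request to an n8n webhook, and the n8n workflow decides, via the human approval node, whether the action actually happens. A minimal sketch; the webhook URL and payload shape are whatever you define in your own n8n workflow:

```python
import json
import urllib.request

def build_approval_request(webhook_url: str, payload: dict) -> urllib.request.Request:
    """Build a POST to the n8n webhook that gates the sensitive action."""
    return urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def request_email_send(webhook_url: str, to: str, subject: str, body: str) -> dict:
    """Ask n8n to send an email; n8n only sends it if a human approves."""
    req = build_approval_request(
        webhook_url, {"to": to, "subject": subject, "body": body}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same shape works for any pre-existing n8n integration you want to expose to a coding agent: the agent gets one narrow webhook per action instead of raw credentials.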
I had pretty mixed opinions when I started using this, but as I've played around more and more, I've found it quite useful for certain operations. If you want to make updates to existing workflows, or if you want to debug certain parts, it can be very good. But it certainly comes with limitations: you don't get anywhere close to the amount of power that you get from using Claude Code to create agentic workflows in code directly. Still, the n8n MCP is quite a good option. If you want to get started building agentic workflows, check out the link to the free repo in the description below. And if you want to learn how to use Claude Code to build sophisticated agents grounded in your company data, make sure to check out our Claude Code RAG masterclass.
👉 Get ALL of our systems and resources: https://www.theaiautomators.com/?utm_source=youtube&utm_medium=video&utm_campaign=tutorial&utm_content=cc-agentic-workflows

🔗 Get the FREE repo for this video: https://github.com/theaiautomators/agentic-workflows

In this video, you get a practical, no-hype breakdown of agentic workflows: how tools like Claude Code can be used to build real automation in code, when they are worth using, and when tools like n8n are still the better choice. A free GitHub repo is linked in the description so you can follow along and build the same workflows yourself.

What's covered:
- What agentic workflows actually are and how they differ from visual automation tools
- A simple workflow loop for coding agents: plan, build, test, and commit
- How Claude Code can scaffold, build, test, and version-control Python workflows with minimal input
- When agentic workflows are a good fit and where they introduce risk or complexity
- A clear comparison of n8n vs agentic workflows, including strengths and weaknesses of each
- Deploying workflows to the cloud
- Using Git for version control, rollback, and iteration
- Adding persistence with a SQLite database stored in Modal to avoid repeat results

Tooling combinations:
- Using n8n as a safeguard layer for approvals and sensitive actions
- Leveraging n8n's built-in integrations alongside code-based workflows
- Letting agentic workflows handle complex logic while n8n handles control and visibility

If you want to start building agentic workflows, check out the free GitHub repo in the description.

Chapters:
0:00 - Overview
2:57 - n8n vs agentic workflows
6:00 - Building your first agentic workflow
13:34 - Deploying to Modal
17:47 - Database on Modal
19:17 - 2nd example
21:46 - Desktop UIs
22:31 - Combining n8n and agentic workflows