Being an Engineer

S7E3 Aaron Eden | How Engineers Can Use AI Today

Aaron Eden Season 7 Episode 3


Aaron Eden brings more than three decades of building, testing, and shipping practical innovation. At Intuit, he focuses on AI-driven process automation, partnering with product, operations, and analyst communities to eliminate manual toil and design customer-centric solutions at scale. His posts highlight ongoing hiring and growth around intelligent automation and a practitioner’s mindset toward measurable impact.

Before Intuit, Aaron co-founded Moves the Needle, where he helped Fortune-scale organizations adopt lean startup and design thinking behaviors. Through executive mentoring and enterprise programs, he guided leaders to shorten time-to-market and increase employee engagement while staying grounded in customer outcomes. He’s also held multiple roles inside Intuit’s broader innovation ecosystem, including Design for Delight leadership and talent initiatives aimed at spreading experimentation across the company. 

Outside of the enterprise, Aaron’s entrepreneurial streak shows up in community and advisory work. He co-leads the Artificial Intelligence Trailblazers meetup—an open community designed to make modern AI approachable—and frequently speaks on translating buzz into business results. He also mentors founders through Startup Tucson and participates in local panels like the University of Arizona’s “Technology for Good,” where he advocates for responsible, accessible AI.

If you’re an engineer or technical leader, you’ll appreciate Aaron’s bias toward running small, smart experiments, measuring what matters, and shipping value fast—principles he’s applied from customer care analytics to RPA/AI platforms. Expect a conversation rich with playbooks for automating high-variance processes, empowering analysts, and building an innovation culture that sticks.

 

LINKS:

Guest LinkedIn: https://www.linkedin.com/in/aaroneden/

Guest website: https://www.brainbridge.app/

Guest NPO: https://www.aitrailblazers.io/

 

Aaron Moncur, host

Download the Essential Guide to Designing Test Fixtures: https://pipelinemedialab.beehiiv.com/test-fixture

About Being An Engineer

The Being An Engineer podcast is a repository for industry knowledge and a tool through which engineers learn about and connect with relevant companies, technologies, people, resources, and opportunities. We feature successful mechanical engineers and interview engineers who are passionate about their work and who have made a great impact on the engineering community.

The Being An Engineer podcast is brought to you by Pipeline Design & Engineering. Pipeline partners with medical & other device engineering teams who need turnkey equipment such as cycle test machines, custom test fixtures, automation equipment, assembly jigs, inspection stations and more. You can find us on the web at www.teampipeline.us

Aaron Moncur:

Hello, and welcome to the Being An Engineer podcast. Today we're talking with another Aaron: Aaron Eden, an AI process automation leader at Intuit and longtime innovation catalyst across roles spanning enterprise, startups, and community building. Aaron has helped teams turn messy, manual workflows into scalable, customer-centric systems using robotic process automation (RPA), AI, and lean experimentation. He's coached executives, launched programs that accelerate time to value, and co-leads Tucson's AI Trailblazers community, where he brings people together to put AI to work in the real world. Outside of all these things he's doing with AI and at Intuit, Aaron was also a keynote speaker at the recent PDX in-person event in October, last month. So I got to meet Aaron in person there, and he delivered an incredible talk, and I asked him if he would consider being on the podcast so we could share this content with even more people. So if you weren't at PDX, we were sad to miss you, and you still get to hear some of the content that was shared at the event. So Aaron, thank you so much for being with us today. I'm excited to be here. Thank you. So, AI is, like, all the rage these days. Obviously it's popping up everywhere. How did you first get into AI and automation?

Aaron Eden:

I'm a geek, through and through. I started programming when I was probably about 12, and I remember I actually ran a bulletin board system for a little while. And there were these scientific papers that would get passed around on these bulletin board systems, and I came across one that was on neural networks. This was probably, I would guess, the mid '80s or something like that, so it was very early days with regard to all of these things. It totally intrigued me, but all of the math was totally over my head. I could hardly understand it, but it was intriguing. So from that point forward, every time something around that has popped up, I've sort of latched onto it and found interest in it. Still to this day, the math is totally over my head. But when I had my last consulting company, it was an innovation consultancy, and one of the biggest problems that we had was that I'd go to a conference and I'd meet somebody from General Electric, I'd meet somebody else from GE Aviation, someone else from General Electric something-or-other, and all of those would be in my CRM system, and it's the same company. And as a change management sort of approach, we want to know who are the hearts and minds that we're touching inside organizations. So one of my first touches into this technology space was building a small NLP system to see that those are actually the same company, or that there's a high likelihood they're the same company, and being able to de-duplicate our CRM system. And then more recently, when all of the large language models really started taking hold, I got totally sucked in. I was watching videos on YouTube nonstop, reading articles, reading scientific papers, this kind of stuff. And for me personally, it was this magical moment where all of these things, the technology geekery, the software development stuff I had in my background, I did a lot of data warehousing and business intelligence work, and then the most recent parts of my career that had been very much focused on the people side of things and change management, all of this came together for me personally. I'm like, oh, this is the way that all of these things that I've learned over the years come together. And what I'm seeing from all of this is that the technology part of it is actually not all that difficult. It's really the people side of it that's difficult, and so those skills in innovation and lean startup and design thinking have come in very useful for being able to help people navigate this kind of crazy change.
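
As a rough illustration of the kind of CRM de-duplication Aaron describes, here is a minimal sketch in Python using only the standard library. The company names, suffix list, and 0.8 threshold are invented for the example; a real system would likely use a proper fuzzy-matching or NLP library and handle abbreviations like "GE" explicitly.

# Toy de-duplication of CRM company names by string similarity.
# Names, suffixes, and the 0.8 threshold are invented for illustration.
import re
from difflib import SequenceMatcher

CRM_COMPANIES = ["General Electric", "GE Aviation", "general electric co.", "Intuit Inc."]

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and drop common corporate suffixes."""
    name = re.sub(r"[^a-z0-9 ]", " ", name.lower())
    for suffix in ("inc", "co", "corp", "llc"):
        name = re.sub(rf"\b{suffix}\b", " ", name)
    return " ".join(name.split())

def likely_same_company(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag two records as probable duplicates when their normalized names are similar."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# Compare every pair and print the ones that look like the same organization.
# Note: abbreviation cases like "GE Aviation" vs "General Electric" need more than
# string similarity, which is where an NLP model earns its keep.
for i, a in enumerate(CRM_COMPANIES):
    for b in CRM_COMPANIES[i + 1:]:
        if likely_same_company(a, b):
            print(f"Possible duplicate: {a!r} ~ {b!r}")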

Aaron Moncur:

I'm curious. It was, I don't know, three or four years ago when I really started being aware of, basically, ChatGPT. I think to say AI would be overstating it, but ChatGPT was on my radar three or four years ago. I started using it and very quickly understood the power behind it, especially for what I mostly do these days, which is marketing and business development and sales around engineering, and I have used it a lot in those spaces. But that was me, and I wasn't really looking for AI or following that part of the industry up until then. You clearly were watching all the YouTube videos and just soaking everything up before LLMs became, or started to become, mainstream. What was that like? Did you see that as an imminent future? Or was that community just kind of unsure what it was going to be, if anything, and when it was going to be? How much uncertainty was there, or was there a pretty clear trajectory that you and that community were all kind of just waiting on until the time came?

Aaron Eden:

I don't know that it was a clear trajectory. I mean, the fact of the matter is, as humans we've been working on this technology for 100 years, right? It's not new. What's new is some specific breakthroughs that have allowed us to do it at a significantly greater scale than we could before. In this case, specifically, the transformer architecture was the breakthrough that led us to these large language models and the ability to run them at super high scale. But NLP and those kinds of things have been happening for a very long time; it just wasn't nearly at that scale or level of impact.

Aaron Moncur:

What does NLP stand for?

Aaron Eden:

Oh, sorry, yes. Neuro-linguistic programming. Okay, natural language processing, depending on whether you're a doctor or an AI geek.

Aaron Moncur:

Neuro-linguistic programming. I've heard the term, but I've heard it in the context of, I don't know, Tony Robbins seminars, which seems different from what you were talking about.

Aaron Eden:

Sorry, I made a joke there, but it wasn't a good joke. The acronym is overloaded: it can mean neuro-linguistic programming or natural language processing. So, natural language processing has been around for quite a while, and that's what I was fiddling around with when I was sharing that story about de-duplicating business names. So no, there have been a lot of people working on this for a very long time. I think there are a lot of what are referred to as emergent properties, new capabilities that these things pick up as we increase the scale, like, oh wow, it can do that thing I didn't realize it would be able to do. So as the scale increases, a lot of those are surprises to everybody. It's like, scale it up and see what it can do this time around, right? And it continues to get smarter. So, anyway, the short answer to your question is really that it's been around for a while, it's not new, there are surprises in what it's capable of, but I don't think anybody knows the trajectory of where it's actually going.

Aaron Moncur:

Interesting. Okay, all right, so let's talk a little bit about how engineers can use AI. What are a couple of ways that engineers can at least dip their toes into the AI pond?

Aaron Eden:

Oh, man, there are so many opportunities. The immediate one that I'm sure everybody thinks about right off the bat is using it for engineering, whether that's writing code or helping with designs of things you're implementing, helping explore possibilities. The way I like to describe it is, you basically have the Einstein of every type of job on the planet in your pocket with these tools. For me, as an example, I'm pretty decent with back-end kind of software development, primarily around databases and data architecture and this kind of stuff; that's an area of strength for me. I suck at front-end kind of stuff. In fact, one of my good friends leads the React team for Meta. He's constantly trying to get me into React, and it's been my nemesis for years. But last year I built my first React application. Actually, I can't take credit: v0 wrote my first React application, and I learned through that, and now I've built a few applications with it. I'm still terrible at it, but in that example, I don't need to depend on somebody else for that, at least to get it started. You still want somebody with some expertise in those areas, but now, as an engineer, there's actually this broader range of things that I can take on that maybe I couldn't have taken on before; I can at least get it up and going. And thinking about HR teams in most large organizations, they usually talk about looking for a T-shaped employee. The top bar of the T means this employee is able to go broad on a bunch of different topics, and the base of the T means they can go deep into a single area of subject matter expertise. But what these AI tools actually enable each of us to be is sort of a unicorn employee, where I can go broad on a whole bunch of stuff, and I can actually go fairly deep on those things with the support of the AI. I do still have to have some expertise; you can't vibe-code your way into a production application without really knowing what you're doing, at least not at this point in time. Maybe in a week or two that'll be possible, but today it's not. You still have to have depth of expertise. But as these tools continue to get stronger, we as engineers are just going to have to continue to take on more. My role inside of Intuit is on a process automation team, and we've actually been experimenting with me playing both sides. I'm the business systems analyst, going and gathering the requirements, engaging with the business users, understanding their current processes, documenting what we're going to automate, and I'm also implementing it and delivering the automations, writing the code, putting it on our production systems, et cetera. So I think a lot of organizations are experimenting with the boundaries of what a single engineer can do and should do. We don't know where it's going. So, back to kind of where we started the conversation, it's important as engineers to lean into our curiosity and go explore new things, because they're changing so fast.

And anybody that says they know where it's going, I think, is BSing you. We can't predict the future. There are some scenarios, like, yes, more stuff is going to get automated and we're going to have to take on more; that's sort of a given. How much, I don't know. Is it so much that we need universal basic income and we don't have to have jobs? Or is it just enough that we each have a broader suite of things that we're taking on? Don't know.

Aaron Moncur:

Yeah, we had a similar experience here. You talked about going broad and deep on different things. We don't do a lot of software, at least not a lot of embedded software. We write PLC code all the time for our automated machines, but not a lot of, you know, C# or that type of embedded software. And we had a project recently where we needed to do some of that. Historically, we probably would have just subbed that out to an electrical engineer or someone else, but this time one of the engineers, who knows a little bit of software himself, just opened up ChatGPT and said, this is what I need to do. And there was the code, and he put it on, I think it was just an Arduino or something, but it worked great. It was perfect, and it was super fast. It was maybe a couple hours of work, versus the few thousand dollars we probably would have paid in the past to have that done. So this leads me to my next question: what are some of the specific tools that engineers should know about? You mentioned v0. You also mentioned vibe coding, which is not a tool per se, but more like a genre, maybe. Can you talk about that a little bit?

Aaron Eden:

Yeah, yeah. Thank you for calling those out, too. By the way, please do, anytime I'm using acronyms or strange terminology, or maybe not even strange, just something that needs to be defined. So, vibe coding. Vibe coding is kind of a slang way to refer to writing computer code using AI tools or AI assistance in a case where, like in the example you just gave of one of your engineers, maybe they don't have that experience themselves. They got a good vibe that the code it came up with worked and was functional, and they got it to go, but they don't necessarily have the deep expertise to dig into it and know whether there are potential minor bugs or issues hiding in there, right? And I'm not trying to talk badly about them, I don't know them. But using me as an example, when I built that React app, there's probably some garbage code in that React app I built that someone who knows what they're doing would look at and go, no, that's bad practice, right? So that's really the reference with vibe coding: being able to write code in an area where maybe you don't have depth of expertise, so you have to kind of go on vibe. And there's a spectrum of people, where at one end it's, I've never written any code in my entire life, and I'm going to use this thing and successfully build an application, and that would be completely vibe-coded. So, I'm realizing that I've forgotten your question, Aaron, I'm sorry.

Aaron Moncur:

That was half of my question right there. The other half was, what are a couple of specific AI tools? Have you ever built a test fixture that didn't work well? Most engineers have, usually because of hidden pitfalls you wouldn't know to look for if you don't design fixtures all day. After watching this happen for years, we built a simple five-step framework that gets fixtures right the first time, and we packaged it in a free guide called The Essential Guide to Designing Test Fixtures. If you want more accurate, repeatable data and fewer redesigns, grab the guide at pml.engineer/test-fixture. Get it, steal the framework, and level up your fixture game.

Aaron Eden:

Yes, yeah. So, specific AI tools. Oh man, okay, I'll list off a few in software engineering. But I know also, from my research before speaking at the PDX conference a little while back, I found a handful of other ones that I normally wouldn't touch as a software engineer but that definitely exist and are important from an engineering perspective. So on the software engineering side of things today: Claude Code, Cursor, there's a bunch of AI coding tools, either command line interfaces or tools that plug right into your development environment, that will basically help you write code in your favorite language. Personally, I'm a total Claude Code addict. I love it, I think it's really well done, and I've got a bunch of pre-built prompts and other things that help me make sure the work is done at a certain level of quality. So Claude Code. This last week I've been experimenting with Google's brand-new Antigravity tool. They acqui-hired the Windsurf team, which was kind of a competitor to Cursor, and this was their first release under the Google brand, but it leverages the new Google Gemini 3 model, which is really, really good. So I'm enjoying that so far. It's been interesting because, as a developer, engaging with Claude involves a lot more interaction, like, hey Aaron, I'm about to do this thing, are you okay if I do that? Or, you know, maybe we should do this other thing. Whereas Antigravity takes kind of the opposite approach: I give it something I want it to go do, it goes and does the entire thing, and then comes back and gives me a write-up on what it did, right? It'll do some code review along the way, but they're completely different approaches. I've been learning AWS, which is not something I had a lot of experience with, and Claude Code has taught me, not all of, but much of the AWS infrastructure I've needed for some of the projects I've been working on over the last year or so, because of that approach of explaining and teaching you along the way. Whereas I think if I'd had Antigravity at that point in time, I probably wouldn't have learned as many of those things, because it would have just taken the task and done it. So not right or wrong, it just depends on what you need. So definitely those IDE tools that can put code right in there. They can actually review the entire code base, rather than the go-into-ChatGPT, ask-it-to-write-something, copy-it-over, back-and-forth approach. You can absolutely code that way, it's just a lot slower, and for probably 20 bucks a month you can have one of these other tools and it's way faster.
Some of the things I found really interesting in preparation for the PDX conference, which I have very little experience with but will mention anyway because I know they're beneficial to your audience: there were two really cool Nvidia tools. One was a simulation platform where, if I've designed something physical and I want to simulate wind drag, or how it would handle specific types of liquids, or how it would handle certain heat levels, whatever, all these kinds of simulations. The way I think about it as a software developer is that's all your unit tests. It would basically do all of your tests, a bunch of simulated testing, to make sure the design you came up with was going to be really solid. And they've built this entire Omniverse simulation platform for being able to simulate any kind of physics situation you might have, to give you an early read on that. So some of those Nvidia tools were really super fascinating.

Aaron Moncur:

And really disruptive, too. The incumbent right now in that area, the simulation area, is probably Ansys, and it's, you know, $10,000 to $20,000 software, plus all the training that's needed to really set up and understand the results of these simulations. So if an AI tool can make that process much simpler, man, that could be really disruptive in the industry.

Aaron Eden:

Yeah, there's a lot happening in that space, too. The other interesting part of it is on the robotics path, the same kind of thing with simulations, where you can put a thousand virtual robots into the same simulation at the same time and then, using reinforcement learning, have them learn how to navigate that environment, and the environment mimics the real world. So now, rather than having to build a physical robot and run it over and over in the real world, you can get millions of simulations. There was a really interesting experiment that was done with one of those robotic dogs, like the Spot dogs, right? The quadruped kind of deal. They put the dog in a simulation where it was on top of one of those bouncy exercise balls, and it needed to stay balanced on the exercise ball. Through these millions of simulations, it learned to balance itself on the exercise ball, and the video that I saw was literally the quadruped, balanced, stable. It's kind of scary, actually, what's possible.

Aaron Moncur:

Almost human, right?

Aaron Eden:

Yeah, totally. It was able to have millions of experiences to get to that level of balance, right? So anyway, these simulations and virtual environments and digital twins, all those kinds of things, are really powerful. And then the second one that I found, I think, was one of the Adobe tools, which was to assist with the actual design of a specific component. What I found really interesting about that: with all of these generative tools, like going into Gemini and asking it to generate an image for you, or going into Suno and asking it to generate a song for you, you write the prompt, and based on the quality of that prompt, you get a generative result, a song or a poem or an image or whatever, right? And you learn very quickly, oh, I left that detail out of the prompt, that's why I got that weird thing. But this generative design tool actually took the opposite approach: you go into the design tool and you basically map out boundaries for what it's not allowed to touch. In this case, the simple example was a wheel on some piece of furniture or something; they were trying to redesign this wheel. So they went in and blocked out all of the constraints, all the things the AI is not allowed to touch: you can mess with anything inside of here, but these things you do not touch. Then they let the generative tool come up with a whole bunch of different possibilities for what might be a really great design for that specific set of constraints, and it gives back all of the possibilities as a sort of prioritized list, so the human can choose which ones to experiment with. And now, thinking about the simulations I was talking about a minute ago, you could take that design, put it into a simulation, and see whether it's going to meet your requirements. So your ability to iterate should increase very significantly, because you can say, here are the top six designs that I, as the engineer for this thing, believe are most likely to be successful based on everything that I know. Now I can put those into six simultaneous simulations and come back with data that says, okay, yeah, your gut was right, the one on top was probably the best performing. Let's go ahead and actually 3D print a prototype, or whatever, and move on to the next step.
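
The generate-within-constraints, simulate, and rank loop described here can be sketched in a few lines. Everything in this toy Python example (the bracket dimensions, the stand-in "simulation", the scoring) is invented purely to show the shape of the workflow, not any particular vendor's tool.

# Toy version of the loop: generate candidate designs inside fixed constraints,
# run each through a stand-in "simulation", and rank the results for a human to review.
# All dimensions, limits, and formulas are made up for illustration.
import random
from dataclasses import dataclass

@dataclass
class Bracket:
    thickness_mm: float
    rib_count: int

# Constraints the generator is not allowed to violate (the "do not touch" boundaries).
MAX_THICKNESS_MM = 6.0
MAX_RIBS = 5

def generate_candidates(n: int) -> list[Bracket]:
    """Propose n designs that respect the constraints."""
    return [Bracket(round(random.uniform(2.0, MAX_THICKNESS_MM), 2),
                    random.randint(1, MAX_RIBS)) for _ in range(n)]

def simulate(design: Bracket) -> dict:
    """Stand-in for a physics simulation: returns fake mass and deflection numbers."""
    mass = design.thickness_mm * 10 + design.rib_count * 3
    deflection = 50 / (design.thickness_mm * (1 + 0.4 * design.rib_count))
    return {"mass_g": mass, "deflection_mm": deflection}

def score(result: dict) -> float:
    """Lower is better: penalize both mass and deflection."""
    return result["mass_g"] + 20 * result["deflection_mm"]

candidates = generate_candidates(20)
ranked = sorted(candidates, key=lambda d: score(simulate(d)))
for design in ranked[:3]:  # top designs go to the engineer, then real simulation or prototyping
    print(design, simulate(design))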

Aaron Moncur:

Yeah. Short break here. I just want to share with everyone that hopefully you have time to listen to the full episodes of the Being An Engineer podcast. However, we understand that life is busy, and if you want just a concise summary of the episodes, go ahead and sign up for our newsletter. Go to thewave.engineer and you'll see on the right side a little banner to sign up for the PML, Pipeline Media Lab, newsletter. There's other information we put in there as well, for example details about upcoming PDX events or webinars, or articles that get released on The Wave, and then, of course, a summary of each week's podcast. So if you want a concise breakdown of all the things we're doing on the media side here at Pipeline, sign up for the newsletter. I think it's biweekly, so every other week you'll get an email with that summary. Okay, back to our conversation here with Aaron Eden. Let's go back to using AI to write software code, just for a minute. You mentioned that there are some prompts you've created that help you ensure you're getting quality code, or maybe there are some guardrails. Are there any best practices for prompt engineering that you can share with the audience?

Aaron Eden:

Yeah, the short version of the whole thing is that it's consistent with what you'd be taught around the software development life cycle and the things you should do as a good engineer, right? Which is that I should start by figuring out what my spec is, what the high-level architecture of the thing I'm going to build is, and actually put together a detailed plan before I start building. Then, in many camps, it's test-driven development, right? Write a test, run that test, and it fails because you don't have the code written yet. You write the code, the test should now pass, and you repeat that test-driven process. All of that works really well with the AI. In most cases, though, it doesn't naturally follow that, so the prompts are really about getting it to follow that. As an example, I have one prompt I can execute in Claude Code that is my planning prompt, where I tell it: okay, great, you're going to ask the user questions to clarify the requirements, then you're going to propose what the plan might look like to implement this thing, and you're going to go back and forth with the user until they agree on the plan. Then you're going to write the plan into a Plans folder, in a markdown file, where I can go back and edit the plan and go, yeah, that part's not quite right, whatever. So that's one prompt, which is come up with a plan. Then I've got a second one that is something to the effect of: you are a world-class software architect, I want you to review this plan for scalability, for security, for these other things, and I want you to critique the plan. So a second pass comes through and starts critiquing the plan. Then I've got a handful of other ones, depending on the situation or the type of thing I'm trying to build. But a lot of it is around coming up with a really solid plan and making sure that part is well done. In that example, the cutting-edge models, Gemini 3 Pro or Claude Sonnet 4.5 or Claude Opus 4.5, those really big, expensive models, work really well for the planning steps: having them go do internet research on the libraries to use, or the best approach, or gotchas that come up from people trying to use the same systems, and capturing some of those things up front. Once you've got a really solid plan, in a lot of cases you can actually use one of the less expensive models to execute it. They'll follow the steps really well, but they won't have the size and scale, or the creativity, to look at the bigger picture, right? So those prompts are really all around planning. Then I've got another set of prompts around execution, and another set around running the linting, running the Bandit security tests, making sure unit tests pass, all these other kinds of things around checking the quality of the code.

And the way I've gotten to that point is: you ask it to do something, you see it make a mistake, and you go, okay, let's change that prompt so it doesn't do that, or we need a command in there to do this other thing, or maybe I add a whole other prompt, like, all right, let's check the security and make sure that kind of thing doesn't slip through. You're kind of just building it up over time. So I would say, if you're just getting started with this stuff, don't worry about all that. Take a first stab at it, and say, okay, great, let's go grab a planning prompt that already exists out on the internet. There are lots of people posting these on GitHub and other places, and you can find them for free, not a big deal. Go grab somebody else's prompts for an SDLC, a software development life cycle, as a starting point and play around with them. Or try it without any prompts, see where the AI screws up, and start writing your prompts around that. So, lots of ways to get at that.
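
Here is a minimal sketch of what a pair of reusable planning and critique prompts might look like, following the workflow described above; the wording, file layout, and helper function are assumptions made for illustration, not Aaron's actual Claude Code prompts.

# A sketch of two reusable prompts (plan, then critique) and a helper that saves
# the agreed plan to a Plans/ folder as markdown, mirroring the workflow described above.
# The prompt text and file layout are illustrative only.
from pathlib import Path

PLANNING_PROMPT = """\
Before writing any code:
1. Ask me clarifying questions about the requirements until they are unambiguous.
2. Propose a high-level architecture and an implementation plan, and iterate with me until I approve it.
3. Write the approved plan to a markdown file so I can edit it by hand.
"""

ARCHITECT_CRITIQUE_PROMPT = """\
You are a world-class software architect. Review the attached plan and critique it
for scalability, security, testability, and maintainability. List concrete risks,
missing steps, and gotchas reported by others using the same libraries.
"""

def save_plan(name: str, plan_markdown: str, folder: str = "Plans") -> Path:
    """Write the agreed plan to Plans/<name>.md so it can be reviewed and edited by a human."""
    path = Path(folder) / f"{name}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(plan_markdown, encoding="utf-8")
    return path

if __name__ == "__main__":
    # Hypothetical usage: save a stub plan produced during the planning conversation.
    print(save_plan("jira-chatbot", "# Plan\n\n- Step 1: clarify requirements\n- Step 2: ...\n"))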

Aaron Moncur:

I've even noticed, and I mostly just use ChatGPT, I haven't really explored other LLMs yet, but in ChatGPT I can tell it to help me write a prompt for whatever this thing is, and 100 percent of the time it gives me a prompt that is way more detailed than what I would have written if I'd just spent a couple of minutes writing it myself. So it can even help you write the prompt, is my point, which has been really useful. Going back to specific tools, there was another one you brought up at PDX that we've actually started exploring and are doing some testing with, and it's super promising. It's called Lindy, I think it's lindy.ai. Can you talk a little bit about that one?

Aaron Eden:

Absolutely, yeah. Lindy is one of my favorites. Shout out to Flo over at Lindy, if you see this; he's a buddy of mine. He lives out in San Francisco and is ex-Uber, he was at Uber in some of the early stages of starting Uber. Anyway, Lindy is a no-code, AI-based automation tool. The way they described it at the beginning, and I'm not sure if they're describing it this way anymore, was: if ChatGPT and Zapier had a baby, this is what would have come out.

Aaron Moncur:

So that's such a good way to describe it.

Aaron Eden:

So, what makes it really magical. It's a workflow designer tool: you put a box here that says do this thing, you put another box in another spot that does this other thing, put another condition over here, et cetera, connect the dots together, and now you've got a workflow. If you think about Zapier or any of these other designer tools you've probably engaged with over the years, you get a design panel in the middle, and on the right-hand side you get a configuration panel to configure each of the boxes and tell it what the variables and settings are. What makes Lindy really magical is that all of those boxes on the right-hand side have an automatic property where you can say, just let the AI choose what to put in this box. Or I can say, let the AI choose, but I'm going to give it a prompt for just this box to guide how it fills it in. Or if you want to pick it manually, you can pick it manually too. So behind the scenes it's basically a big AI agent, all being run by the large language model: oh, I pulled this data in from Google Drive, I've got this Google Doc that I want to, what's a good example, I want to more efficiently keep my project status updates current in Jira. So every time I have a meeting, I fire off an email to a special mailbox that's got some of my notes in it, and it gets attached to my project, so whenever my boss asks me for a project update, I can just pull it right out of there. In that example, I have Gmail connected as a trigger into Lindy that's checking my email. The AI can read the content of that email and tell, based on reading the words, oh, that's a project status update. Then it can go put that data into Jira, or whatever the case may be. It's all an AI agent that you can basically spin up in minutes. So, very powerful platform. The last thing I'll say about it is that what you usually end up with with those kinds of tools is that it connects to this system you need but not this other system you need; they've got something like 6,000 connections, though, so it's pretty well covered as far as those things go.
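
Lindy itself is no-code, but the underlying pattern described here (a trigger, an LLM deciding what the message is, then an action) looks roughly like this Python sketch. The classify_with_llm and add_jira_comment functions are hypothetical stand-ins, not real Lindy or Jira APIs.

# Rough shape of the email-to-Jira agent described above: a trigger hands the agent
# an email, an LLM decides what it is, and the agent takes the matching action.
# classify_with_llm() and add_jira_comment() are hypothetical stubs for illustration.
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    body: str

def classify_with_llm(email: Email) -> str:
    """Stand-in for the 'let the AI choose' step; a real agent would call a model here."""
    return "project_status_update" if "status" in email.subject.lower() else "other"

def add_jira_comment(issue_key: str, text: str) -> None:
    """Stand-in for the Jira action node; a real workflow would call Jira's API."""
    print(f"[JIRA {issue_key}] {text}")

def handle_incoming_email(email: Email, issue_key: str = "PROJ-123") -> None:
    """The whole 'workflow': trigger -> LLM classification -> action, with a fallback."""
    if classify_with_llm(email) == "project_status_update":
        add_jira_comment(issue_key, email.body)
    else:
        print("Not a status update; leaving it alone.")

handle_incoming_email(Email("Weekly status notes", "Finished the fixture redesign; testing next week."))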

Aaron Moncur:

Pretty well-connected app, yeah. The use case we're experimenting with right now is more of a sales-based use case. We have a product called Easy Motion; it's a simple way for engineering teams to automate things, motion-based, with sensors and things like that. And we're looking for customers for this product, right? So we're building this Lindy workflow where we can tell it, okay, we've got this product, Easy Motion, here's what it does, and then it'll go off and start looking for companies out there that might have a need for this product, that might find it beneficial to them. And we'll give it some context so it understands what kind of a company would actually use Easy Motion. So it'll go off, it'll find the companies, it'll identify some of the engineering leaders there, it'll go find their contact info, and then it'll actually create an email and send it out. And of course we have some checks in there, because we don't want it to go off the rails and start doing weird things. But it's pretty amazing that a tool can do all that without you having to code anything yourself. And then, as you were talking, I thought of another potential use case for this. We build automated machines, and we always deliver a user manual with them, and the user manuals are just a pain in the butt to write. Nobody likes doing that documentation; it takes a long time, it's tedious. And I wonder if there's a workflow we could create where we feed the AI information about the machine during the course of development, so that by the end of development it knows pretty well what this thing does and how it works, and we could just push a button and say, okay, create the user manual for this machine. It's probably not going to be perfect, but maybe it gets us, you know, 70 or 80 percent of the way there.

Aaron Eden:

Absolutely, no, I think that's a great use case. And in that example, putting my lean startup hat on for a moment, I would also question whether there's anybody actually reading that manual. So I would look at whether there's a different form to deliver that knowledge to the person. Could you have, and I don't know that chat is the right interface, but could you have a chatbot where your end users are actually chatting with the code, or chatting with the design specs, chatting with the artifacts you need to create as an engineer anyway, not the other stuff? Can you get a bunch of their questions answered from something like that? Is that intermediate document even necessary? Maybe it is, but...

Aaron Moncur:

That's a really interesting idea, yeah. Just deliver an AI chatbot that a user can actually ask questions to. That's a really neat idea. Okay, let's talk about trust and AI. AI is obviously super powerful, it's increasing our productivity all over the place, but sometimes it gets things wrong. How should we be thinking about our trust of what the AI, these LLMs, tell us?

Aaron Eden:

Yeah. You shouldn't trust anything they tell you. Like, seriously. Thinking about code that I'm having Claude Code write, I have to check all of it. I need to review the code, I need to make sure it didn't put in something weird. A great example: I had a very simple automation I was working on earlier today that basically allows my team at Intuit to interact with a chatbot and do all of their Jira stuff. They can go, oh, move that story into In Progress, or, hey, I'm done with that, or add a note to it, or whatever. You don't need to go play in Jira all the time, you can just ask the chatbot to do it for you. I was making a modification to that AI agent and its connection to Jira. It was running into an issue where there was one field it was messing up; I couldn't update that field for some reason, so I was trying to fix a bug. In the process of doing that, the AI tool ran into an authentication issue. My personal token for Jira wasn't working as it should have, and to troubleshoot it, it thought maybe the token wasn't being sent to Jira properly, so it added some workaround code to get around that. Well, I didn't see when it added the workaround code, and once I caught what was happening, I'm like, oh, my key expired, I just need to give it a new key and it'll be okay. So we got through the whole thing, got the bug fixed, et cetera. I checked the code in, and one of the other developers reviewing my PR was like, hey, why do you have this sort of double authentication thing? You're passing the authentication token twice; is that necessary? Which one is it actually using? I went back and looked, and I'm like, oh, no, actually, it added that as a workaround while it was testing to try to figure things out. So even in that example, on something small, if you're not checking it... In that case, the right thing happened, which is that code review caught it. I should have caught it, but it also got caught in code review. So you can't trust it. It makes mistakes all over the place. It'll come up with things that aren't possible. So in software development, the tests are your friends: having the tests in place means that when it goes and changes something weird and one of your tests breaks, you know what happened there. But you have to do code reviews the whole time. Same thing if you're having it write documents for you, or having it automatically go find prospects on the internet, or having it write emails on your behalf. It is really important to make sure you're designing these processes with a human in the loop and that you're verifying things as they come through. It can get you 70 to 80 percent of the way, but don't trust that it's getting you 100 percent; just assume that it's not. Maybe next year or the year after those checks won't be necessary, but at this point in time you really just have to assume that it's like a super smart, right-out-of-school kind of intern. It knows everything, but it just doesn't have a lot of the school of hard knocks, you know?
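
One simple way to build the human-in-the-loop check recommended here is an explicit approval gate in front of any irreversible action. This is a generic Python sketch; the send_email stub and the example addresses are hypothetical.

# A minimal human-in-the-loop gate: the AI can draft the action, but a person must
# approve it before anything irreversible happens. send_email() is a hypothetical stub.
def send_email(to: str, body: str) -> None:
    """Stand-in for the real outbound action (email, Jira update, code push, etc.)."""
    print(f"Sent to {to}: {body}")

def approved_by_human(description: str) -> bool:
    """Show the proposed action and require an explicit yes before executing it."""
    answer = input(f"AI proposes: {description}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"

def run_with_approval(to: str, draft: str) -> None:
    """Only perform the drafted action if a person signs off on it."""
    if approved_by_human(f"email {to}: {draft[:80]}"):
        send_email(to, draft)
    else:
        print("Rejected; nothing sent.")

# Example: an AI-drafted outreach email only goes out after a person signs off.
run_with_approval("engineer@example.com", "Hi, we think Easy Motion could help your test team...")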

Aaron Moncur:

And to be fair, humans aren't right all the time either, right? Humans make mistakes, humans provide false information. So it's not like ChatGPT or these LLMs have some monopoly on being wrong on occasion. To what extent do you think LLMs will get to the point where, and I don't know, the definition of wrong could even come into play in this answer, but they'll get to the point where they're right at least as often as humans, where they're as trustworthy as or more trustworthy than a human source?

Aaron Eden:

So, if you look at the benchmarks: each time a large language model is released, there's a whole slew of benchmarks it gets run against, right? As an example, there's one called SWE-bench, a software engineering benchmark. It's got a whole bunch of different software engineering challenges, and you basically throw a challenge at the model and see, can it solve this, can it solve it correctly? They've got ones around math and engineering; there are benchmarks galore. In one of my workshops recently, I posted a chart that basically shows how all of these large language models are progressing against all of these different benchmarks, and they're all going up and to the right. Around the 80 to 90 percent level is sort of above human average on these things, and very quickly they're going up and above that level. So there's that part of the equation, which is that they're all getting better, and it's likely that at some point soon that's the case. But it's this jagged kind of thing, where it's super knowledgeable in one area, but then it'll make a little mistake right next to it, right? It's not like it's amazing at all of science; no, it's really good in this one area, but there are other little weird things over here where it's terrible. And there are just so many different possibilities that there's no way for us to know where those edges really are. So I don't know. I think they're absolutely going to continue to get better. The other thing that's happening, and I think a lot of people get hung up on the benchmarks, is that all of the tooling we're building around them is making them significantly more capable. As an example, take GPT-5.1, the model itself, not necessarily the ChatGPT user interface, or Google Gemini 3 Pro, just the model: the capability of the models is significantly increased when you wrap Cursor or Claude Code or these other things around them. When you create this harness around the model that says, here, for this task, this is how you're going to be successful, it becomes significantly better, to the point where, on their own, the models are generally terrible at math, but they can write Python code like crazy, and they can solve almost any mathematical problem with Python code. So you give them that tool and they're off to the races, right? So yes, I think there are natural limitations on where you can go with large language models, but I don't know that we know where those boundaries are, and we also don't know how far the tooling around them can take us. In a lot of ways it's better than many humans. It may not be better than me as a software developer, but it's better than me in a whole bunch of other areas. So don't get wrapped around the axle about where it's going, or whether it will be artificial general intelligence, or whether it's better than humans.

I think the key takeaway is that it's better than us in a lot of different ways today, so learn how to use it, learn how to be more productive than your colleagues, and we'll see where it takes us.

Aaron Moncur:

Do you know, is the rate of improvement of these AIs increasing, or is it kind of, well, stagnant gives the wrong connotation, but has it started to plateau?

Aaron Eden:

Yeah, I've seen debates in both directions. It's interesting, because everybody was debating this whole "the models aren't getting significantly better" thing last year, and then Gemini came out a couple of weeks ago and is now off the charts compared to what a bunch of the other ones have been able to do. But I don't know how much of that is their internal infrastructure and the internal magic of stuff they've wrapped around the model, versus what the model itself is capable of. But the other really interesting thing that's happening, though, is...

Aaron Moncur:

In that debate?

Aaron Eden:

The debate is primarily about large language models, that specific thing, transformers focused on language and what we're able to do with those. But we have generative models around music, generative models around graphics, generative models around 3D worlds, generative models around 3D design, all these different models in completely different forms. And what a lot of organizations like Google and others have figured out is that, if you go into the Gemini web app and interact with it, it's using its image generation model, it's using its code generation model, and so on. We're starting to create models that are very surgically focused on specific topics or specific capabilities, and then use the large language model as kind of a router to say, oh, they need a picture, I'm going to send that to the picture model, I don't need to do that myself, right? So I don't know, man. I think the combination of the stuff wrapping around these models, plus the specialized models, means we're going to be seeing some pretty significant acceleration for the short term, the next few years. And who knows whether Terminator is born or not.

Aaron Moncur:

Skynet is coming? Yeah. Along those same lines, do you have any guesses or expectations for what will happen in the next, like, 12 to 18 months around AI? What should we be anticipating?

Aaron Eden:

I think you should, at a minimum, expect that the same patterns that have already been established continue, which is that the models continue to get better, the models continue to get faster, and the models continue to get cheaper. Those things keep happening. There are so many companies going after it, and so many scientists doing exploration and research in the space, that those things are likely to continue. So I think the key is that we need to be constantly learning, we need to be open-minded and exploring new things, and we need to be focused on helping each other through this, because it's not easy. As a software engineer, when Claude 4.0 came out, I had that feeling of, oh, am I going to be needed anymore? This thing is really good, wow. Everybody's feeling it in different ways. There was one comment I wanted to make before, when we were talking about Lindy, and I think now might be a good time for it. One of the things we're doing with AI Trailblazers, the nonprofit that my wife and I started, the purpose of which is really to democratize access to all of these different AI tools: if you look at the traditional technology adoption curve, people that are not college educated, or that maybe are not as well off, tend to get access to technology later than people that are more affluent. So we've created an apprenticeship program where we take people that are in poverty or in difficult financial situations, we teach them how to build AI-based workflow automations, in this case using Lindy, and then help them get higher-paying work as a result. And we've been able to do that successfully, meaning that we've been able to take people with very basic computer skills, capable of surfing the web, capable of sending some emails, that kind of thing, and get them to the point where they're building AI-based workflow automations, solving some of those same challenges you were talking about, researching contacts and doing sales kind of stuff, right? And I raise that story because I think it exemplifies where all of us as engineers need to be. We need to be trying new stuff, we need to be learning, we need to be exploring, because these things are changing really, really quickly. We don't know where it's going to take us, but we'll definitely get there better if we do it together.

Aaron Moncur:

That's so awesome that you're doing that. Thank you for doing that. What a wonderful, worthwhile initiative that is. All right. Well, Aaron, this has been great. Thank you for sharing some of your knowledge and wisdom with us. How can people get in touch with you? I'm easy to find. I'm on LinkedIn under my full name, Aaron Eden. The other Aaron Edens in the world are angry, because I get "Aaron Eden" on basically every social network. So you can find me that way, on Facebook, or on LinkedIn, or on Twitter, or Instagram, or whatever. So, pretty easy to find. There is one other Aaron Eden in a similar space, who lives out in San Francisco, but I'm pretty easy to tell apart from him. So please do reach out. I love this stuff, I love helping others, and I love seeing what others are doing with these technologies and how you're changing the world. So let me know how I can help, or share your success stories as well; I'd love to hear those things. Amazing. Aaron, thank you so much.

Aaron Eden:

Thanks for having me.

Aaron Moncur:

I'm Aaron Moncur, founder of Pipeline Design & Engineering. If you liked what you heard today, please share the episode. To learn how your team can leverage our team's expertise developing advanced manufacturing processes, automated machines, and custom fixtures, complemented with product design and R&D services, visit us at teampipeline.us. To join a vibrant community of engineers online, visit thewave.engineer. Thank you for listening.