Aviatrix Blog

To Drive in a Nail, You Need a Hammer: How Aviatrix is Using AI Internally

Learn how Aviatrix is using AI internally to promote efficiency, communication, and productivity. This content was not generated by AI.

AI has moved fast, from sci-fi dream to “everyone has a chatbot” in what feels like five minutes. But for most companies, that hype hasn’t translated into real productivity. At Aviatrix, we’ve taken a different approach. We’ve been focused on using AI internally in a way that’s grounded, tactical, and, most importantly, useful. Here are some lessons we’ve learned along the way.


Beginning with AI: Finding the Right Tool for the Right Job

I first began working on AI initiatives at Aviatrix a year and a half ago. When we began building AI tools in-house, we learned something very quickly: AI and its possibilities are too broad to serve as a starting point.

To give an analogy, telling your employees to “use AI to do your job better” is the equivalent of finding someone who wants to drive in a nail, taking them to a hardware store, and telling them, “everything you need is here.” The person would look in bewilderment at rows upon rows, shelves upon shelves of hammers, drills, nails, screws, washers, bolts, wrenches, saws, and other equipment, much of which could help – but without more specific guidance, they would have no idea where to start. Clearly, that is not the best way to introduce AI.

The early results of this effort were messy. We presented multiple different ways to use AI, but our employees didn’t know what to do with them or how to turn them into something useful.

To use AI correctly, you need to think like a product manager or startup entrepreneur. You need to ask, “what is the specific problem we need to solve?” and then focus on solving that one small thing.

Here are a few specific problems we addressed right away:

  • Creating a Support chatbot – We started with the question: “how can we streamline and simplify the support process for our customers?” One answer was a support chatbot that pops up when someone opens a ticket. The chatbot acts as a research assistant for the customer, guiding them to relevant steps and technical documentation. It offers the customer a customized search for resources that can help, freeing our Support staff for more complicated issues. (A minimal sketch of this retrieval pattern appears after this list.)
  • Building a platform for Support teams – This was one of my favorite initiatives. Our Support teams often have 3-8 systems to log into to mitigate customer issues, each of which is great at one specific type of issue. Unfortunately, it wasn’t always clear which tool would be needed, so Support would have to move from one tool to another manually, resubmitting all the same information every single time. This process would stretch a 30-minute issue into something longer than an hour. We asked, “how can we protect our Support team from having to reenter data manually so many times?” We then designed an AI solution that lets Support submit the same information just once to six of these tools and switch between them live. (See the fan-out sketch after this list.)
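
To make the chatbot concrete, here is a minimal sketch of the “research assistant” pattern it follows: index the documentation, then surface the closest matches when a ticket comes in. Everything here – the word-overlap scoring, the sample docs, the function names – is illustrative, not our production implementation (which would pair an LLM with a real knowledge base).

```python
# Illustrative sketch only: a tiny "research assistant" that suggests
# documentation for a new support ticket. A real system would use an
# LLM with embeddings; plain word overlap shows the shape of the idea.
from collections import Counter
import math
import re

# Hypothetical documentation index; in practice this would be built
# from the actual knowledge base.
DOCS = {
    "Gateway unreachable troubleshooting": "Check security groups, route tables, and gateway status before restarting.",
    "Upgrading a controller": "Back up the configuration, verify version compatibility, then run the upgrade wizard.",
    "VPN tunnel flapping": "Inspect IKE lifetimes, MTU settings, and peer logs for renegotiation loops.",
}

def _tokens(text: str) -> Counter:
    """Lowercased word counts, used as a crude text vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_docs(ticket_text: str, top_n: int = 2) -> list[str]:
    """Rank docs by similarity to the ticket and return the best titles."""
    query = _tokens(ticket_text)
    ranked = sorted(DOCS, key=lambda t: _cosine(query, _tokens(t + " " + DOCS[t])), reverse=True)
    return ranked[:top_n]

print(suggest_docs("My VPN tunnel keeps going up and down every few minutes"))
```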
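
And here is a minimal sketch of the “submit once, fan out” idea behind the Support platform: one normalized case record is reshaped by per-tool adapters into whatever each backend expects, so nobody retypes anything. The tool names and field mappings below are invented for illustration; the real platform integrates six production systems.

```python
# Illustrative sketch only: submit one case record to several backend
# tools through adapters, instead of retyping the data in each UI.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CaseRecord:
    customer: str
    summary: str
    severity: str

# Hypothetical adapters: each reshapes the record into the payload a
# specific tool expects. Real adapters would call each tool's API.
def to_log_analyzer(c: CaseRecord) -> dict:
    return {"query": c.summary, "tenant": c.customer}

def to_ticket_tracker(c: CaseRecord) -> dict:
    return {"title": c.summary, "priority": c.severity, "account": c.customer}

def to_diagnostics(c: CaseRecord) -> dict:
    return {"target": c.customer, "hint": c.summary}

ADAPTERS: dict[str, Callable[[CaseRecord], dict]] = {
    "log_analyzer": to_log_analyzer,
    "ticket_tracker": to_ticket_tracker,
    "diagnostics": to_diagnostics,
}

def fan_out(case: CaseRecord) -> dict[str, dict]:
    """Build every tool's payload from a single submission."""
    return {tool: adapt(case) for tool, adapt in ADAPTERS.items()}

case = CaseRecord("acme-corp", "BGP session drops after failover", "high")
for tool, payload in fan_out(case).items():
    print(tool, payload)
```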


With each issue, we’re building on our theme: identify a clear, simple problem that AI can solve, then create a solution. To drive in a nail, you need a hammer.


Teaching People to Use AI

Using weekly office hours, a main hub for AI tools and questions, and 1:1 trainings, we’re slowly teaching our people how to tackle problems with AI on their own. Here are a few challenges we’ve worked through.


A Simple Introduction to AI

Our first step was simply making AI available to people in a safe, secure manner. We started with Microsoft Copilot, which is great at internal document search and discovery, and linked in the other apps and programs we typically use. This gave everyone a simple, personalized large language model (LLM) setup – you feed data into it, then query against it – just to get people used to working with AI.


AI is not Google Search: Training People to Create Prompts

One immediate challenge is teaching people to create and refine prompts. I’ve worked with mainstream GenAI for two years, and I think network engineers will be great at this – network engineers have always treated networks like they’re alive.

Others tend to treat an AI solution like Google Search. We’ve trained our brains to use Google not in natural language, but in concise, staccato verbiage engineered for an SEO match – for example, “best coding platforms” or “top ten principles of resiliency.” But a GenAI system is built for natural language queries: you need to ask questions the way you would ask a person.


Google Search (Optimized) vs. Generative AI Prompt (Optimized):

  • Phrasing – Search: short, keyword-based phrases. GenAI: full, detailed questions or instructions.
  • Goal – Search: finding existing pages or documents. GenAI: creating an answer or synthesizing new information.
  • Example – Search: “best hiking trails Colorado summer easy moderate.” GenAI: “Can you recommend 5 easy to moderate hiking trails in Colorado that are best for summer, and explain why each one is a good choice?”
  • Reading – Search: you skim multiple sources yourself. GenAI: the AI summarizes, explains, or tailors the answer for you.
  • Refinement – Search: refine your search terms if the first results aren’t perfect. GenAI: give full context up front to get a custom, nuanced response.
  • Syntax – Search: fewer words, plus operators (quotes, minus signs, etc.). GenAI: more clarifying words, constraints, tone, and an intended output format.
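
To show what that shift looks like in practice, here is a small, illustrative Python helper that assembles a GenAI-style prompt from the ingredients above (task, context, constraints, tone, output format). The function and field names are my own, not any standard API.

```python
# Illustrative sketch only: turn a terse, search-style query into a
# full GenAI prompt by adding context, constraints, tone, and format.
def build_prompt(task: str, context: str, constraints: list[str],
                 tone: str, output_format: str) -> str:
    lines = [
        f"Task: {task}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Tone: {tone}",
        f"Output format: {output_format}",
    ]
    return "\n".join(lines)

# Search-style habit: "best hiking trails Colorado summer easy moderate"
# GenAI-style prompt built from the same intent:
print(build_prompt(
    task="Recommend 5 easy-to-moderate hiking trails in Colorado for summer.",
    context="Recommendations are for casual hikers planning day trips.",
    constraints=["Explain why each trail is a good choice",
                 "Keep each explanation to two sentences"],
    tone="Friendly and practical",
    output_format="Numbered list",
))
```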


Security and Privacy: Best Practices for Feeding Data into AI

When it comes to using AI, everyone is struggling with data integrity (how do you make sure you’re entering correct information into AI?) and data privacy (how can you use AI without violating people’s privacy?). We dealt with this issue internally when we gave everyone the ability to upload files into LLMs. Thankfully, I’d built in a check system that scans uploads for personally identifiable information (PII). The check system flagged 90 out of 100 files because they contained people’s names. These types of challenges take fine-tuning and habit-forming before they feel seamless – for example, getting into the habit of scrubbing files of people’s names and any other sensitive data before uploading them.
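
To give a rough idea of what such a pre-upload check can look like, here is a small illustrative sketch that flags files containing likely PII before they reach an LLM. The regex patterns and the tiny name list are stand-ins; a production check would rely on a dedicated PII-detection service and a real directory of names.

```python
# Illustrative sketch only: flag likely PII in text before upload.
# A real deployment would use a dedicated PII-detection service.
import re

# Hypothetical, tiny examples of patterns a checker might use.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
KNOWN_NAMES = {"alice", "bob"}  # stand-in for a real employee directory

def pii_flags(text: str) -> list[str]:
    """Return the kinds of PII found in the text, if any."""
    flags = [kind for kind, pat in PII_PATTERNS.items() if pat.search(text)]
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & KNOWN_NAMES:
        flags.append("name")
    return flags

sample = "Contact Alice at alice@example.com or 555-123-4567."
flags = pii_flags(sample)
if flags:
    print(f"Blocked upload: found {', '.join(flags)}")  # email, phone, name
```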


AI as an Asset, Not a Stress Inducer

I was recently explaining to someone how AI can save them time. “That’s disingenuous,” the person responded. “They’re just going to make us work more.”

It’s a legitimate question: how can we use AI to equip our people better – without making unreasonable demands? As AI drives efficiency, is there a higher expectation of output?

One way is to set clear, reasonable expectations based on which tasks people are using AI to do, or to do more efficiently.


AI as an Arms Race

AI is the new arms race; the first organization to get to agentic AI wins. Every major cloud provider and plenty of others are investing in this race. For example, Google announced with Gemini 2.5 that they’re giving people the ability to use AI to build workflows – workflows that can take actions for people. Part of our work in equipping Aviatrix employees with AI is keeping an eye on both the race and our people: in other words, giving busy employees the chance to grow and improve using AI with the models they’re most comfortable with.


Most of the problems around AI involve leaders who have a vision but no clear, tactical idea of how to implement AI to benefit their employees, customers, and partners. By starting small, defining clear, simple problems, and training people in basic best practices, we’re guiding people to identify each problem (“I need to drive in a nail”) and handing them a hammer.


Curious about how organizations are integrating AI into their products? Explore the four milestones for distinguishing AI-washing from actionable intelligence in cloud network security.