Make Generative AI Your Business Data Advantage

Author: Todd Slind, Vice President of Technology
Connect to systems with geospatial tools and deliver chat and virtual assistant experiences for your employees and customers

The AI evolution continues. Are you ready?

With the increasing use of services like ChatGPT, more businesses than ever are looking to tap into generative AI, powered by large language models (LLMs), as a way to get the right information and answers to anyone in the organization. Wouldn’t it be great to ask a computer for an answer instead of figuring out where it resides, how to ask, who to ask, and how to get your hands on it? You create value by making your enterprise data available at your beck and call. At the same time, stories of data breaches and failed AI experiments trigger concerns about accuracy, security, and usefulness. And securing the technical expertise and strategic planning required can be a challenge.

Luckily, there are patterns of approach that alleviate these concerns and provide a framework for success, allowing businesses to harness the power of generative AI to accelerate the dissemination of knowledge to all corners of the office. Companies across industries can dissolve data stovepipes and departmental barriers. They can unlock insights buried deep in their enterprise data using a chat experience. Instead of making requests or carrying out multi-step processes to get answers, users can quickly get the information they need to solve problems and carry out tasks, increasing operational efficiency, productivity, and effectiveness along the way.

Generative AI makes companies jumpy

Why are more businesses than ever looking to tap into conversational AI or LLMs in the first place?

Let’s start with the media. Across every channel imaginable—from news outlets to social to entertainment—it’s impossible not to consume stories or listen to chatter around experiences like ChatGPT. And if everyone is talking about it, that means senior leaders at businesses are too—and how they might harness this emerging technology to develop business, whatever the business may be.

At many businesses, individuals or departments independently experiment with the technology or begin small pilot projects. Organizations recognize that they need a plan and policies to avoid a patchwork, non-strategic approach. They must weigh the advantages and threats and consider how to remove barriers and stove-piped data access to facilitate seamless information retrieval in a way that is safe, secure, and strategic.

However, the challenges of implementing an internal or hybrid LLM are significant. The biggest challenge involves the technical work of preparing data. Data typically aren’t structured well enough to be useful to an LLM, or they arrive in disparate formats. If they reside in a database, they may use schemas the LLM wasn’t trained on and thus doesn’t know how to read. In addition, data may be locked into proprietary technology that keeps it inaccessible to an LLM. Finally, there is often a general lack of metadata, so data sources are poorly documented, making it difficult to determine what’s authoritative.

To a lesser extent, the scale of deploying the technology and its potential can be intimidating because of the risk factors involved, such as confidential data leaking out of the organization. Companies may also find a lack of in-house expertise a significant hurdle to building internal applications and understanding how to prepare data for the new technology.

Generative AI starts with the right design

So, how does an organization begin its generative AI journey? The temptation for many businesses is to use a third-party generative AI service, simply visiting a website to get a general answer to a fundamental question.

But if you need to keep proprietary data secure and private—trade secrets, competitive tradecraft, confidential customer records, or regulated information—you need your own LLM. You have options, at different levels of maturity and sophistication. Some models are trained on a modest number of parameters for a narrow scope of interest; others are trained with billions of parameters and can, for example, generate images and consider the full context of a question.

Organizations starting to use generative AI responsibly want to leverage pre-trained, commercially available models. Still, they don’t want to train that LLM on their data. They want it both ways: a hybrid architecture that gives them access to a valuable service they don’t have to create while letting them retain complete control of their data.

Ultimately, companies want to make the most valuable information available to anyone who needs it, whenever they need it, wherever they reside. They want the most effective means available to take advantage of what’s known as a hierarchy of value in the data sphere, consisting of:

  1. Raw data (unstructured, unfiltered, unformatted)
  2. Information (data processed into usable format)
  3. Knowledge (add context for functional uses)
  4. Wisdom (the insights gleaned from actual application)

The right architecture, along with standards for question formats, involves segregating data that can’t be exposed to an LLM from the data you can safely send to and receive from one.

Employing the retrieval-augmented generation (RAG) pattern keeps those things separate. Preparing data for generative AI and adapting an LLM to your use cases starts with knowing the domain of inquiry or application.
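In spirit, the RAG pattern works like the minimal sketch below: only approved documents are retrieved, and the model sees nothing but that retrieved context and the question. The document store, the keyword-overlap scoring, and the stubbed model call are all illustrative assumptions, not a production implementation.

```python
# Minimal sketch of the RAG pattern: retrieve approved context, then generate.
# The corpus and the generate() stub are hypothetical; a real system would
# call an actual LLM with the assembled prompt.

APPROVED_DOCS = {
    "substation-report": "Substation 7 passed inspection in March.",
    "outage-summary": "Two outages occurred last quarter, both under one hour.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank approved documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        APPROVED_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(question: str, context: list[str]) -> str:
    """Stand-in for an LLM call: the prompt contains only retrieved context."""
    prompt = f"Context: {' '.join(context)}\nQuestion: {question}"
    return prompt  # a real deployment would send this prompt to the model

question = "How many outages occurred last quarter?"
answer = generate(question, retrieve(question))
```

The key property is that the source systems themselves are never exposed: the model only receives the curated snippets the retriever chose to hand over.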

You need to design and understand the domain of inquiry—what do you want the LLM to help you answer? For example, asking questions summarizing critical indicators of business operations may be your domain of inquiry. For that use case, deriving the answer would include tapping a specific set of data sources. It also involves using or accessing a taxonomy or terminology your intended end user would understand and use. An engineer, for example, would have a different lexicon than a field service technician, though both may be involved in the same use case (such as building infrastructure). Training the model to know the language of the business and of its different end users is vitally important.

In addition, if you don’t know up front the questions your system will be asked, you won’t be able to test your models for accuracy effectively. Pre-defining your domain of inquiry lets you determine whether you’re getting correct answers and gauge the quality of responses from your LLMs.

For example, a model could give a “hallucination” response—an incorrect answer that does not fit the facts or information supplied to the AI model. Another failure mode is AI “laziness,” where an end user expects a robust answer but instead gets something far less sufficient, such as a two-word response.

Knowing the questions allows you to identify the data sources you need to get the correct answers and avoid AI hallucinations, laziness, and more. Location is a crucial dimension here: your data’s geography can serve as a unifier of other data types. If you have a location, you can build strategic integrations that bring data from multiple systems together—customer (CRM), financial (ERP), asset (EAM), and more—to make your answers more robust.
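One way to picture location as the unifier is a join across systems that share a site identifier: records from CRM, ERP, and EAM merge into a single context object for that place. The system names, site key, and fields below are hypothetical, chosen only to make the idea concrete.

```python
# Sketch: using a shared location key to join records from separate
# business systems. The record contents are illustrative assumptions.

crm = {"site-42": {"customer": "Acme Water", "tier": "enterprise"}}
erp = {"site-42": {"annual_cost": 125_000}}
eam = {"site-42": {"asset": "pump-7", "status": "needs service"}}

def unified_view(site_id: str) -> dict:
    """Merge per-system records for one location into a single context."""
    merged = {"site": site_id}
    for system in (crm, erp, eam):
        merged.update(system.get(site_id, {}))
    return merged

context = unified_view("site-42")
```

With the merged context in hand, a question about “site-42” can draw on customer, financial, and asset details at once instead of three separate lookups.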

Companies benefit from establishing standards for question formats, or prompts, that exploit the locational aspect of their business data using geospatial technology combined with generative AI. This combined solution ensures accuracy, adaptability, and usefulness in LLM responses. It also establishes the correct response structures, including where to find the data.

For example, an OT application might monitor data on a transmission line represented as a straight line, yet that line spans many different communities, from farms to factories to urban centers. With a process known as “vector embedding,” AI can combine the physical attributes, operational status, geometry, and functional context of the asset to package all the relevant content for performing any number of functions. SCADA schematics alone fall short of delivering that level of detail.
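In rough terms, that embedding step flattens the asset’s attributes into one description and maps it to a fixed-length numeric vector so similar assets land near each other in vector space. The toy hashing embedder below stands in for a real embedding model, and the asset fields are hypothetical.

```python
# Sketch: packaging an asset's attributes into one description, then
# embedding it. The hashing embedder is a toy stand-in for a real model.

def describe_asset(asset: dict) -> str:
    """Flatten physical, operational, geometric, and functional context."""
    return " ".join(f"{k}={v}" for k, v in sorted(asset.items()))

def embed(text: str, dims: int = 8) -> list[float]:
    """Toy bag-of-words hashing embedding: each token bumps one dimension."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    return vec

line = {
    "type": "transmission line",
    "status": "energized",
    "geometry": "LINESTRING(-122.3 47.6, -122.1 47.5)",
    "serves": "urban center",
}
vector = embed(describe_asset(line))
```

A real system would swap the hashing trick for a learned embedding model, but the shape of the pipeline—describe, embed, then search by vector similarity—stays the same.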

Your generative AI partner

Locana has the technical expertise and real-world experience to help you successfully deploy generative AI at your organization. Locana has worked with multiple clients worldwide to build the right architecture, integrations, models, and prompts to ensure security, accessibility, adaptability, and more. Our data scientists, designers, developers, project managers, and industry experts bring in-depth knowledge and skills built on successful deployments. As a TRC company, Locana has the additional investment and engineering expertise to bridge the gap between AI, machine learning, cloud, and other innovations and your business objectives.

Locana’s work with clients in the US and worldwide begins with “owning the customer mission.” By adhering to a listen-first model that includes open communication and collaboration through all stages of the project lifecycle, Locana works diligently to ensure AI projects deliver on time and within budget.

With Locana as your trusted AI partner, you gain:

  • Deep AI knowledge and experience working with clients
  • Technical expertise in LLM and generative AI techniques
  • A proven track record of delivering successful solutions
  • Dedication to open communication and ownership
  • Solutions, packages, and patterns that accelerate time-to-value

Start your AI conversations today

Generative AI looks to be a boon for society, and businesses are just now beginning to take advantage at the enterprise level. By employing modern architectural patterns, you can take advantage of this cutting-edge innovation to tear down data silos to deliver critical answers and unparalleled insight across the organization—by simply asking a question.


Visit today to learn more about who we serve, what we do, and the solutions we offer.

