AI & GTM STRATEGY
A Foundation for Understanding AI's Impact on Your Business
An analysis by Ben Scoones, created for Revenue Innovations
TECHNOLOGICAL CHANGE
Why AI Matters Now
When OpenAI made ChatGPT-3 available to the public in November 2022, with its astounding capacity to understand and interpret natural language prompts and produce coherent and accurate responses, it became clear to many professionals the colossal impact AI would have on work, organizations, and industries. While exciting for many, this development also elicits fear and uncertainty regarding the future of professions and organizational viability.
This concern is natural when on the horizon of great upheaval, but the present is not unique. Technology has been developing for centuries, and in the past there has been technological change so significant whole industries have been warped—the development of machinery in the industrial revolution, or the widespread use of computers in the 70s and 80s. We can look back and see how beneficial they have been for industry and the economy as a whole, allowing fewer to do more with less.
Now, almost three years on, the potential of AI in work is no longer theoretical but is being fully realized. AI is being utilized across almost every industry, from creating new opportunities on the bleeding edge to mundane but valuable administrative work. However, while adoption is well underway and already delivering value, it is by no means complete.
Understanding how best to leverage the technology, and doing so responsibly, remains critical for success.
THE NATURE OF AI
Artificial intelligence is, at its heart, a simple concept: a machine which can perform cognitive functions similar to humans—mathematical calculations, playing chess, differentiating between images, or having conversations. Some tasks are harder than others. Computers have performed calculations for decades; IBM's Deep Blue defeated chess master Gary Kasparov in 1997; image classification has developed greatly over the last decade, even being applied in healthcare.
Until recently, machines struggled with natural language tasks. Models capable of working with language existed but were restricted to specific tasks and weren't convincing anyone of human-level comprehension. Now, however, language models are markedly more sophisticated and capable of performing vast varieties of tasks with clever prompting. Related technologies like AI agents extend capabilities beyond text generation to digital action.
LARGE LANGUAGE MODELS
OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, Anthropic’s Claude, and X’s Grok all belong to a class called large language models (LLMs), which fall under the broader category of generative AI—AI that can create new content (text, audio, images).
LLMs are trained on vast amounts of text-based data to learn language patterns. Building an LLM requires deep expertise and significant resources, but using one is now widely accessible. Foundation models like ChatGPT can perform a range of tasks without additional training, making them adaptable across many contexts.
THE LLM LANDSCAPE: DIVERSE MODELS, DIVERSE USES
The LLM ecosystem is highly dynamic, with different models excelling in different areas. Some prioritize rapid response and general knowledge; others focus on deep reasoning, real-time awareness, or integration with multiple data types. Open-source options allow greater customization, while multimodal models handle text, image, and other formats for richer experiences.
GETTING TO KNOW THE MAJOR LLMs
FOUNDATION MODELS & THE VALUE OF DATA
Democratization & Differentiation
Foundation models democratize generative AI: either the model is sufficient for your use case and the barrier is learning how to use it (prompt engineering), or it requires fine-tuning with additional data, which is challenging but not as great as building from scratch. The growing number of fine-tuned models means one likely already exists for your use case.
Moreover, significant development has occurred in technologies surrounding LLMs. While LLMs may be restricted to generating text, many models now have the ability to use tools, granting them ability to perform tasks using external resources (e.g., web-search). Models can be used in compound AI systems where they fulfill specified purposes or, as AI agents, intelligently select how systems operate. With these developments—tools, agentic AI, compound systems—the range of applicable tasks has increased dramatically.

THE LIMITATION ON AI
Increasingly, the limitation on AI is not the technology itself but two factors: an organization's ability to adapt appropriately and responsibly, and the additional data provided to the model.
DATA IS (THE) DIFFERENCE
At its most basic, data is difference, the distinction between what is and what could have been. The coin toss came up heads; objects fall when dropped; consumers don't always make optimal choices. What makes these noteworthy is that each didn't need to happen. The observation of facts informs us how the world works, how people behave, how events are related. Models use information the same way.
Generative AI's ability to understand language, retrieve information, and generate content is impressive. But fundamentally, what generative AI does is no different from statistical and machine learning models of past decades. At bottom, these models learn relationships between variables and produce outputs based on what they've learned. As such, AI performance is fundamentally limited by the data, specifically the information content in that data, it has access to.
The implications are significant: unlike AI, data is not democratized, and not all data is of equal value.
Not all data is equally valuable. Consider a US software business with 500 clients aiming to grow to 750 through cold calling. They need to prioritize leads based on product, size, valuation, or technological sophistication. They have three data options: (1) 500 businesses including many current clients; (2) 400 businesses, none current clients; (3) 1,000 UK-based businesses. Option 2 is best—Option 3 has most data but isn't relevant; Option 1 has less novel information.
DATA ADVANTAGE & AI APPLICATIONS
Competitive Edge & Value Creation
What makes data valuable depends on factors including relevancy, novelty, ease of use, and scarcity. Much data is publicly available, and models are trained in part on public data. But much remains inaccessible—technically challenging to access, behind paywalls, unavailable for purchase, or not stored. If data is unique and scarce, it's valuable to those who can access it. If your organization has privileged access to novel data, you have competitive advantage, as this data can improve model performance or create entirely new strategies and products.
Many organizations are recognizing this opportunity, leveraging unique data and knowledge with AI to gain advantage. Enterprise knowledge integration—identifying, centralizing, and making organizational knowledge accessible—is one example. When done well, AI performance for tasks relying on this information improves significantly, as does knowledge of organization members when AI serves as the knowledge-source interface. With steadily lowering training costs and increasingly sophisticated development tools, building your own model or AI system has never been easier.
APPLICATIONS OF AI
AI's potential value is becoming reality. According to HAI's 2025 Index Report, organizations using AI increased from 55% in 2023 to 78% in 2024. Investment is massive, over $100 billion invested by private US organizations in 2024.
Percent of Organizations Using AI Globally
The technology impacts multiple industries and capacities, with job postings increasing across information services, finance, manufacturing, agriculture, and in functions including marketing, product development, strategy, service operations, and software engineering.
FIVE KEY APPLICATIONS
Ways to Create Value
1
Routine Tasks & Efficiency
AI's ability to generate context-relevant natural language makes it immediately useful for rote tasks across almost every industry: writing emails, documentation, adapting content, summarizing meetings, basic information lookup.
2
Research & Information Retrieval
AI performance in gathering and retrieving information has improved significantly. Combined with improved competency on complex tasks, AI is now exceptional for conducting research—industry trends, market analysis, academic research.
3
Ideation & Exploration
AI is tremendously useful in ideation. While not truly creating something new, it's been trained on vast data, making it extremely useful for exploring possibilities and surfacing ideas.
4
Complex Workflows & Copilots
AI's flexibility, particularly in compound AI systems, is valuable in complex contexts where multiple tasks must be completed for larger goals. Many AI copilots exist across domains.
5
Democratizing Technology
AI can open doors to non-AI technologies previously inaccessible. The low barrier to using AI allows non-technical users to create digital technologies.
A comprehensive approach to applying AI will consider all of your organization. It presents value in almost every function, particularly those digitally driven or information-centric. Application within many functions is already being considered, meaning likely immediate value if still in adoption phase.
While many tasks fit the same mold, almost every context is different. Though AI has given rise to new products, services, and ways of working, many more remain to be explored. A comprehensive approach should not merely look at what exists but continue considering what might exist. With AI development still rapid, maintaining awareness of and readiness for developments remains a value source.
RISKS OF AI
Risks surrounding AI differ from when technology first became available in late 2022. Initially, greatest safety risks concerned data privacy and unclear governance and compliance. Over the last few years, these risks have lessened significantly. All major model providers are clearer about what user data trains their models; many like OpenAI and Anthropic generally don't use user inputs to train models. Much work around AI governance has provided resources ensuring organizations can responsibly and safely use, build, and maintain AI applications and systems, examples include NIST's AI Risk Management Framework and ISO/IEC 42001:2023.
But some risks remain, and new risks are emerging as technology matures and adoption spreads. We review six main risk areas: data privacy, opacity, quality of output, security, malicious actors, and social impact.
Once built, models can learn from and incorporate data they've seen. Short-term memory is common, models remember inputs briefly (e.g., duration of chat session) and consider these when providing answers to subsequent prompts. Long-term memory can be implemented where interactions are stored and referenced permanently. Provided this data is only referenced during your personal sessions, data privacy is of little concern (though data leaks remain a risk as with all online activity).
However, if data is somehow made available to other users through the model, this is a greater issue, potentially exposing private data (PII, IP, or other sensitive information) to unintended parties. This is a known risk if model providers use user-submitted data to retrain models. While providers address these issues, it remains incumbent on users to ensure their company's data remains secure.
AI RISKS: PRIVACY, OPACITY & QUALITY
Understanding Key Vulnerabilities
1. DATA PRIVACY
Data privacy is the primary concern for many AI users. All generative AI models are initially trained on datasets to build them. Data privacy here isn't concerning since models aren't trained on anything not publicly available or approved by creators.
If providing a model to customers, even if you didn't build it, many considerations are worthy of attention: How will you protect privacy of users' data, both personal and proprietary? How can you help customers use your model responsibly by providing insight into how it's generating outputs and from which sources? How can you verify output quality to mitigate bias and misinformation? Is there any way to minimize use of your model for malicious purposes, with safeguards that cannot be bypassed? Will a customer-facing model expose your IP to customers or introduce other liability?
2. OPACITY
Opacity refers to users having little or no insight into how models generate outputs. This poses challenges regarding output quality. If unable to see from where and how the model generated its output, you must either trust it as sufficient and appropriate for your intended use case, or validate it yourself, reducing efficiency benefits gained from using generative AI. Additionally, exact data sources used to train models are often not entirely public, adding uncertainty regarding output quality. That said, AI transparency, explainability, and interpretability are becoming greater focus areas. Elements like citations of factual content and system trace data are being added to some AI applications, and some model providers are sharing training data sources.
3. QUALITY OF OUTPUT
Quality of output is the risk users are most familiar with, with hallucinations and bias being primary issues. Hallucinations—when models generate false, nonsensical, or misleading information—are often easy to catch for domain experts and the AI-literate. However, for less knowledgeable or experienced users, this represents significant issues. Over-reliance on technology can introduce bugs or incorrect information into work with insidious consequences. Furthermore, inaccuracy isn't the only quality issue; depending on how models are built and on which data they're trained, they can be biased—discrimination against particular groups, lack of representative data in particular domains, or in model behavior favoring certain response types.
NIST and ISO/IEC standards offer assistance, as does ongoing work of many political institutions across the globe like the US government and EU, but more work needs to be done around AI regulation, and it can only go so far in protecting against malicious actors.
AI RISKS: SECURITY & SOCIAL IMPACT
Managing Critical Challenges
4. SECURITY & MALICIOUS ACTORS
Malicious actors can include users, service providers, and your own employees. Users can exploit and manipulate AI systems through prompt injection or jailbreaking, attempting to change model behavior to produce unintended or harmful outputs. Input guardrails, strict permissions controls, monitoring, and red teaming can help identify and mitigate this risk. Within organizations, employees can use AI tools without organizational knowledge or consent—referred to as shadow AI. This creates compliance issues, risks data privacy, generates hidden costs, and is best managed by combining employee education with proactive definition and provision of policies and acceptable tools.
Shadow AI can also be a risk outside organizations, with service providers making use of AI without your knowledge or consent, potentially breaching contracts and depreciating service quality.
5. SOCIAL IMPACT
Like other great technological innovations, AI will have huge impact on the workforce and individuals within it. While some job cuts have been directly attributed to AI, much of the fear around AI replacing workers appears more theory than reality presently. AI is clearly aiding and accelerating people in their work, but largely isn't replacing them—the capability for AI to be autonomous isn't yet there in many areas. Even if AI replaced, say, 20% of what workers do, expecting corresponding employment drop ignores potential for new work and economic growth to come in its place. Writers and coders can look forward to less time on rote prose and code, more time on ideation, design, and judgment.
As AI continues integrating more into business, how this is done will require considerable thought, particularly regarding the often overlooked value of human capital within organizations.
FLOURISHING WITH AI
Most organizations now use AI in some capacity, meaning the outstanding question for most is not how to get started but how to flourish in it. Here are steps any organization can take to ensure they're prepared to take advantage of AI not just now but also in the future.
01
MAINTAIN AWARENESS & PATIENCE
The dust hasn't settled around AI. Over past years development has come thick and fast—quality of available models, sophistication of related technologies, and proliferation of AI-powered businesses all constitute meaningful progress. These developments don't appear to be slowing down. This is important to be mindful of when refining your AI strategy. It's wise to be aware of major developments in the industry as these naturally have significant effects on what AI can do and how well. Because change is happening quickly, shiny new objects appear constantly, and it's important not to be distracted. It can be valuable to remain patient and stick to your goals, waiting to see what transpires and how the market responds before discerning whether adopting something new or shifting priorities is really the best decision. Some may develop precisely what you're looking for while you wait. This takes wisdom in understanding your organization, your team's skills, and your place in the market.
02
CULTIVATE AI LITERACY
Building a foundational understanding of AI across all levels of your organization is crucial. This involves not only training employees on how to use AI tools effectively but also educating them on AI's capabilities, limitations, and ethical implications. Fostering an environment of curiosity and knowledge-sharing helps demystify AI, reduces apprehension, and empowers teams to identify new opportunities for integration and innovation responsibly.
03
ESTABLISH CLEAR POLICIES
As AI adoption grows, clear governance and policy frameworks become indispensable. Develop comprehensive guidelines for the responsible and ethical use of AI, addressing critical areas such as data privacy, security, intellectual property, and compliance. These policies should guide employee behavior, mitigate potential risks, and ensure that AI initiatives align with organizational values and legal requirements, protecting both your business and its stakeholders.
04
IDENTIFY & MEASURE OUTCOMES
To truly flourish with AI, it's essential to define success metrics from the outset and rigorously measure the impact of AI implementations. Beyond initial enthusiasm, focus on quantifiable outcomes like efficiency gains, cost reductions, enhanced decision-making, or improved customer experience. Regularly assess these metrics against your strategic goals, allowing for iterative refinement and adaptation of your AI strategy to maximize its value and ensure continuous improvement.
YOUR PATH TO AI EXCELLENCE
Implementation Framework
If you're developing your own AI products, there are additional steps to ensure the service you're providing is safe, secure, and of good quality. This includes proper handling of hallucinations and bias, care over customer data, and monitoring and evaluation of AI systems to ensure they're being used responsibly and outputs are both good and safe.
While this may sound daunting, this is an important part of mature organization's AI operations. These policies not only keep you and your customers safe and provide good experiences, but clear procedures also give your team freedom to innovate and create value without constant and oppressive micromanagement. Without clear boundaries, it's easy for innovation to be stifled and spontaneous ideas to go unexplored because there's simply no proper or accessible outlet.

Being able to show clearly that AI has boosted efficiency in one business area, and how and why, is a powerful tool for increasing adoption and inspiring similar applications. If providing an AI product or service, this provides a clear signal that cuts through the speculation and vague promise with which the market is rife.
CONCLUSION
The day-to-day of business and the nature of work has changed and is still changing. This guide provides understanding of why AI has caused this change and how you and your organization can mature alongside it, ensuring you can use it as a tool for growth and innovation as this change continues. First, identify use cases within your existing organizational structure and work processes. Application of AI here is the low hanging fruit and may offer immediate efficiency gains with little investment.
REFERENCES
  • Zaharia, M., Khattab, O., Chen, L., Davis, J. Q., Miller, H., Potts, J. Z., Carbin, M., Frankle, J., Rao, N., Ghodsi, A., 2024, The Shift from Models to Compound AI Systems, Berkeley Artificial Intelligence Research
  • Maslej, N., Fattorini, L., Perrault, R., Gil, Y., Parli, V., Kariuki, N., Capstick, E., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C., Shoham, Y., Wald, R., Walsh, T., Hamrah, A., Santarlasci, L., Lotufo, J. B., Rome, A., Shi, A., Oak, S., 2025, The AI Index 2025 Annual Report, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University
  • Callaway, E., 2024, Chemistry Nobel goes to developers of AlphaFold AI that predicts protein structures, Nature
  • NIST, Artificial Intelligence Risk Management Framework, 2023, NIST
  • Challenger, Gray & Christmas, Inc. (2025, July 31). The Challenger Report July 2025.
READY TO PUT AI TO WORK IN YOUR GTM?
Let's Discuss Your AI Strategy
You’ve seen how AI can reshape the way GTM teams plan, operate, and win. The next step is finding where it fits in your world.
At Revenue Innovations, we help B2B organizations apply AI thoughtfully to sharpen strategy, accelerate execution, and strengthen the human judgment that drives lasting results.
If you’re ready to explore what that could look like for your team, we’d love to continue the conversation.
Learn more about our services: