Rules for the responsible use of AI
At this company, responsible use of artificial intelligence adheres to the following rules:
- No wholly AI-generated content should reach customers, colleagues, or external partners. AI-generated writing, imagery, and code should pass through a rigorous human refinement and fact-checking process before it’s shared.
- No sensitive information about our customers, our colleagues, our company, or our products should enter a commercial AI tool that hasn’t been explicitly approved for use at the company. If it’s not public knowledge, it shouldn’t be fed to an AI model.
- AI is a catalyst for expertise, not a replacement for it. Use AI to amplify your efforts and extend your insight, not as a substitute for your professional judgement. What you put out into the world is what you're known for—no one wants to be known for cutting corners. Up-skill, don’t uproot.
Understanding artificial intelligence
What if you had a collaborator who could:
- Transform meandering drafts into sharp narratives,
- Surface hidden patterns in customer feedback,
- Generate creative alternatives to your concepts, or
- Pressure-test your ideas by playing devil's advocate?
That’s the promise of today’s artificial intelligence tools. Think of AI as a partner, a way to boost the impact of your efforts—not as a replacement for your wisdom and ingenuity.
This guide will help you understand how AI works so you can take advantage of its strengths and anticipate its limitations.
What is AI, anyway?
Today’s artificial intelligence isn't actually intelligent.
Artificial Intelligence (AI) is an aspirational label for a set of powerful technologies that can generate text and imagery, write code, solve problems, and evaluate vast data sets. It’s a representation of where we want the technology to end up rather than where it is today.
Modern AI systems, including Large Language Models (LLMs), Machine Learning (ML) systems, and Generative Adversarial Networks (GANs), are all just sophisticated prediction engines—they recognize patterns and predict what should come next.
In other words, AI doesn’t think.
When we talk about modern LLMs improving, they aren’t learning to reason, they’re learning to mimic our expected output better. They’re building a deeper understanding of the patterns in our language, data, workflows, and expectations.
That’s why the more information and context an AI model has, the more powerful and accurate its output becomes.
Consider a well-known cautionary example: an AI model trained to detect skin cancer from dermatology photos. Since almost all of the initial training data showed a skin imperfection with a ruler beside it to help gauge scale, the AI immediately picked up on the pattern and drew a logical—but nonsensical—conclusion: that the presence of a ruler indicates skin cancer.
These sorts of outcomes illustrate the value of good training data. The more information an AI model has about the nature of the problem and our existing understanding of the context surrounding it, the more successful it can be at helping us. This is why companies developing AI models care so much about the quality and quantity of training data they have.
Today, AI models are capable of detecting melanoma with 99.5% accuracy.
How AI models create things
The way AI models generate content is similar to the way human beings do. After all, we’re also pattern-matching engines—only we’re squishier and more thoughtful.
You can think of artificial intelligence like an extremely well-read colleague who:
- Has absorbed millions of documents, images, conversations, and other kinds of knowledge as part of their training.
- Can recall things instantly, often combining aspects of the training in novel ways.
Unfortunately, this colleague lacks real-world experience or any genuine understanding of the material. Creativity is a synthesis of knowledge, experiences, and feelings. AI models lack experience and feeling, but they can draw from more knowledge than any human mind could contain.
When AI models receive your input, they evaluate it through the lens of their training, then draw from their knowledge to craft a response that’s statistically likely to fulfill your request. Their success depends on the quality and quantity of training data, the relevance of that data to your request, and the quality of the prompt you offer as input.
How AI training works
A common misconception is that AI models “cut and paste” from their training data, outputting a collage of preexisting content. But that isn’t how they work.
To an AI model, training data is less like a filing cabinet and more like a statistical soup. They don’t understand language in terms of words, or imagery in terms of pixels.
During training, those things are broken down into a sort of digital mulch called tokens. Those tokens flow through a series of processing steps where they influence interconnected numerical values. Each of those values, called a weight, represents a specific aspect of how the model combines tokens into an output and, ultimately, how it generates an answer. Hence the name for this mechanism: a Generative Pre-trained Transformer, or GPT. Modern examples of this kind of LLM have trillions of weights.
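If you want to see what tokens actually look like, here is a minimal sketch using the open-source tiktoken library (our own choice for illustration; this guide doesn’t prescribe a particular tokenizer, and different models chop text up differently):

```python
# A quick way to see tokenization in action, assuming the open-source
# `tiktoken` package is installed (pip install tiktoken).
import tiktoken

# "cl100k_base" is one widely used tokenizer vocabulary.
encoding = tiktoken.get_encoding("cl100k_base")

text = "AI doesn't think, it predicts."
tokens = encoding.encode(text)

print(tokens)                                  # a list of integer token IDs
print([encoding.decode([t]) for t in tokens])  # the text fragment each ID stands for
```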
At first, the model’s understanding is crude. Its weights are random, so its predictions are little more than guesses. But through repeated exposure to vast amounts of data, guided by mathematical and human-assisted reinforcement of what a correct output looks like, the model gradually refines these parameters, learning the intricate relationships between tokens.
By the end of training, a model’s weights have been adjusted to the point where it can reliably generate text, imagery, and audio, recognize patterns, and make useful predictions based on the knowledge it’s been fed.
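For readers who want a mechanical picture of what “refining the weights” means, here is a deliberately tiny sketch using PyTorch (an assumption on our part; real LLM training is vastly larger and more involved, but the feedback loop has the same shape: guess, measure the error, nudge the weights, repeat):

```python
# A toy next-token predictor, far simpler than a real transformer,
# showing how repeated exposure plus feedback adjusts the weights.
import torch
import torch.nn as nn

vocab_size, embed_dim = 50, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),   # tokens -> vectors
    nn.Linear(embed_dim, vocab_size),      # vectors -> scores for every possible next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Fake "training data": each token should predict the token that follows it.
tokens = torch.randint(0, vocab_size, (256,))
inputs, targets = tokens[:-1], tokens[1:]

for step in range(100):
    logits = model(inputs)                 # the model's current guesses
    loss = loss_fn(logits, targets)        # how wrong were they?
    optimizer.zero_grad()
    loss.backward()                        # work out how each weight contributed to the error
    optimizer.step()                       # nudge the weights to guess slightly better next time
```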
When models output results that closely match something in their training data, it’s because the prompt was specific enough to lead the model toward the best possible synthesis available from its training data, even if that synthesis happens to match the way a human had previously interpreted the task.
The richer the training data, the less likely this phenomenon becomes.
Why AI content sometimes feels “off” (and why it’s often wrong)
One of the reasons AI makes us uncomfortable is that it produces a kind of uncanny valley effect.
It reflects our own tendencies back at us in a dispassionate way. This can be especially troubling when biases in its output reflect biases in its input. AI models are trained on human culture and creativity, but when those elements are reinterpreted without the inherent sympathy of the human touch, the result is familiar but discomfiting.
More importantly, it’s often wrong.
This is less surprising now that you understand how these algorithms work. They give you the most likely answer, not necessarily the right one. They have no reliable way to evaluate whether something they generate is true or not.
Several techniques are being developed to work around this problem, including:
- Retrieval Augmented Generation (RAG), sketched briefly after this list.
- Reinforcement Learning from Human Feedback (RLHF).
- Constitutional AI.
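To illustrate the first of these, here is a heavily simplified sketch of the RAG idea: look up relevant material first, then hand it to the model alongside the question so the answer is grounded in something checkable. The documents, the naive keyword-overlap retriever, and the prompt wording below are all our own placeholders; real systems use vector search and an actual model call.

```python
# A toy illustration of Retrieval Augmented Generation (RAG).
# Real systems use embeddings and vector databases; this keyword-overlap
# retriever only shows the shape of the technique.

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Enterprise customers have a dedicated account manager.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, documents))
    # The model is asked to answer *from the supplied context*,
    # which reduces (but does not eliminate) hallucinations.
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What is the refund policy?"))
```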
But for now, these sorts of errors—called hallucinations—remain a fundamental limitation of the technology. Until we sort that out, it’s best to err on the side of caution.
This is why our first rule exists: AI content needs human verification before it goes anywhere.
The complicated ethics of generative AI
As AI becomes more prevalent in our work and society, we have to confront several legal and ethical questions that determine what it means to use these tools responsibly:
- Who owns the output of generative AI tools? Who takes credit for their creativity?
- Who’s responsible when AI makes mistakes?
- How do we protect intellectual property rights in the pursuit of ever-larger databases of training data?
- If AI can produce work faster and better than the humans whose work it’s trained on, what does our role become?
- Will future training data become recursively dependent on AI-generated materials? What implications would that have for the richness and utility of the output?
- Will efforts to share knowledge continue to target people, or will they shift to the AI models those people are going to ask for advice? Will it become easier or harder to find trustworthy information?
- How do we sustainably meet the power and compute requirements these systems need to operate?
- Can we advance this technology without sacrificing our dignity to do it?
Challenges like these help explain why we need clear boundaries around AI use. While we can't put the genie back into its bottle, we can commit to using AI in ways that respect creators, protect quality, nurture taste, and uplift human agency.
Using Large Language Models (LLMs)
How not to use AI
Before we tackle using these tools effectively, let's be clear about what kinds of usage we don't condone. You may not use AI to:
- Generate content that violates company policy or applicable law.
- Create or use deepfake imagery of real people without their consent.
- Produce deliberately misleading or manipulative materials.
- Make or influence critical decisions without full human oversight and accountability.
- Assess, profile, exploit, or displace workers.
Different kinds of AI tools
This guide is mostly focused on Large Language Models, or LLMs, which power the majority of publicly available AI tools. But there are other categories of AI technology you may encounter:
Specialized AI tools focus on specific tasks rather than general-purpose interaction. These can help with things like:
- Image creation (DALL-E, Midjourney, Stable Diffusion).
- Code completion and analysis (GitHub Copilot, Amazon CodeWhisperer).
- Speech recognition and synthesis (Whisper, ElevenLabs).
- Translation (DeepL, Kagi Translate).
- Data analysis and visualization (Tableau Pulse, Microsoft Power BI).
Multi-modal models can process more than one kind of input and output. Most commonly, you would use them to work with text and imagery together, rather than needing separate models for each task. Large-scale commercial models like OpenAI’s GPT-4 and Google Gemini are now multi-modal.
Agentic models, or AI agents, are an emerging application of artificial intelligence that aims to complete tasks more autonomously. Unlike traditional AI models, agents can plan sequences of actions, use multiple tools, and persist toward goals over time. They're designed to operate independently, but they still require careful human oversight.
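To make the “plan, act, observe” pattern of an agent concrete, here is a heavily simplified sketch. Everything in it is a placeholder of ours: the `decide()` step stands in for asking a model what to do next, and the single tool is invented for illustration. Note the hard step limit and the hand-off to a human, which reflect the oversight this guide requires.

```python
# A toy "agent loop": the model proposes an action, the surrounding program
# executes it, and the observation is fed back until the goal is met.
def decide(goal: str, history: list[str]) -> dict:
    """Stand-in for asking a model what to do next."""
    if not history:
        return {"tool": "look_up_order", "args": {"order_id": "12345"}}
    return {"tool": "finish", "args": {"summary": "Order 12345 ships Friday."}}

TOOLS = {
    "look_up_order": lambda order_id: f"Order {order_id} status: ships Friday",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):          # a hard limit keeps the agent from looping forever
        action = decide(goal, history)
        if action["tool"] == "finish":
            return action["args"]["summary"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append(result)          # the observation informs the next decision
    return "Stopped: step limit reached; a human should review."

print(run_agent("Find out when order 12345 will ship"))
```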
While different types of AI serve distinct purposes, they often work together in modern applications. For example, a customer service chatbot might use an LLM for conversation, speech recognition to process voice input, and specialized models to handle things like appointment scheduling or order tracking.
Use a local model whenever possible
AI tools are typically made available in one of two ways: as commercial products that operate under a Software-as-a-Service (SaaS) model, or as open-source software that you can download and run on your own—often for free.
A key difference is that open-source models can be run locally, on your own devices, entirely offline—without sending data to a remote server. This requires powerful hardware, but has obvious data privacy and security benefits.
Consider using open-source models like Llama 3 instead of commercial ones like ChatGPT. This reduces the risk of unintentional data leaks and frees you from parsing complex Terms of Service documents that may claim ownership over the output.
In fact, you can be more lenient about applying Rule 2 around sharing information if (and only if) you choose to run an AI model locally on your own hardware. This is why we strongly recommend doing so whenever possible—you can tell it more and get better results.
While commercial models are usually closer to the state of the art, we’ve reached a point where issues with an AI’s output are more likely related to the quality of the prompt than the quality of the model.
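As an example of what “running locally” can look like in practice, here is a minimal sketch that talks to a model served by Ollama, one popular open-source runner. This is our choice for illustration only; the guide doesn’t mandate a specific tool, and the endpoint below assumes Ollama is installed, the model has been pulled, and the server is listening on its default port.

```python
# A minimal sketch of prompting a locally hosted model.
# Assumes Ollama is installed, `ollama pull llama3` has been run,
# and the Ollama server is on its default port (11434).
import json
import urllib.request

payload = {
    "model": "llama3",
    "prompt": "Summarize the key risks of sharing sensitive data with AI tools.",
    "stream": False,
}

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

# Nothing in this exchange ever leaves your machine.
print(result["response"])
```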
How to use AI effectively
Clear communication is the most important skill for getting good results from AI tools.
“Prompt engineering” is essentially a combination of good reasoning and knowing how to describe what you’re after. Forget about obscure syntax and try to interact with AI models the way you would with a human collaborator:
- Explain the situation: “I’m working on a presentation and need some help organizing my points…”
- Talk about your goal: “We’re more interested in awareness than conversion for this campaign, so let’s focus on ideas that spark discussion and sharing…”
- Provide as much relevant context as you can: “Here’s our database of meeting minutes so you can factor the team’s previous discussions into your recommendation…”
- Specify the format: “I need the resulting text in Markdown format, with each section separated by a checkmark emoji…”
- Ask for any clarifying questions before starting: “What additional information would help you give me a better response?”
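Putting those elements together, a complete prompt might be assembled like this. The sketch is in Python and the placeholder wording is ours; the same structure works just as well typed directly into a chat window.

```python
# Assembling a prompt from the elements above. The specific wording is
# illustrative; the point is that each element is stated explicitly.
situation = "I'm working on a presentation and need help organizing my points."
goal = "We care more about awareness than conversion, so favor ideas that spark discussion."
context = "Here are the team's meeting minutes from the last three sessions: ..."
output_format = "Return the result in Markdown, with one section per talking point."

prompt = "\n\n".join([
    situation,
    goal,
    context,
    output_format,
    "Before you start, ask me any clarifying questions that would improve your response.",
])

print(prompt)
```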
Basic prompting tips
Here are some high-level guidelines to consider when working with LLMs:
- Break complex tasks into smaller steps. Understanding what’s involved in solving your problem helps you as much as it does the AI.
- You don’t have to accept the first response. Ask for further refinement, offer guidance, and work with the model to fine-tune your results.
- Offer examples. Show, don’t just tell. Give concrete examples of what you want and don’t want.
- Be clear about what sort of output you want. If it’s a list of 10 items, ask for that. If it can’t be longer than 250 words, say so in your prompt. It might not get it right, but it’ll get it closer.
- Make it a two-way conversation. Treat it like a dialogue. Ask for advice. Have it play devil’s advocate. Challenge it to challenge you.
Advanced tips
- Tell the AI to adopt a role. This feels strange but it works. Start your prompt by telling the AI that it’s the right expert for your request (e.g. “You’re a talented B2B marketing writer who explains complex subjects concisely…”).
- Try more than one model. ChatGPT, Gemini, Claude, Llama... each model has strengths and weaknesses, and the technology is moving so quickly that they’re constantly leapfrogging one another in capabilities. If you’re not getting what you want from one, ask another.
- Use multi-turn prompting for better depth. Instead of cramming everything into one huge prompt, guide the AI through a conversation. Start with a broad question, refine based on its response, and steer it toward the intended output.
- Prompt it to think step by step (or argue with itself). AI can rush to an answer without considering all angles. Get better results by asking it to walk you through its reasoning. Or go further: “Give two opposing viewpoints on this, then reconcile them.”
- Combine AI outputs for better results. Instead of relying on a single response, generate multiple variations and merge the best parts. AI can help with that part, too: “Here are two versions—what are the strengths and weaknesses of each?”
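Two of these tips, role prompting and multi-turn prompting, come down to how you structure the conversation. Here is a sketch of that structure using the message format most chat-style models accept; the role text and replies are placeholders of ours.

```python
# Role prompting plus multi-turn refinement, expressed as a running
# message history. Most chat-style APIs and local runners accept this shape.
messages = [
    # Tell the model what expert to be (role prompting).
    {"role": "system", "content": "You are a talented B2B marketing writer who explains complex subjects concisely."},
    # Turn 1: start broad.
    {"role": "user", "content": "Draft three angles for announcing our new reporting feature."},
]

# ...send `messages` to your model of choice, then append its reply...
messages.append({"role": "assistant", "content": "<model's three angles go here>"})

# Turn 2: refine based on the response instead of starting over.
messages.append({
    "role": "user",
    "content": "Angle two is closest. Push it further, and argue against it before reconciling the two views.",
})
```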
Further reading
Interested in learning more? Here are some helpful resources worth reviewing as a jumping-off point for further exploration.
On the underlying mechanisms and how it all works
Will Douglas Heaven What is AI?
3Blue1Brown Large Language Models explained briefly
Timothy B. Lee Large language models, explained with a minimum of math and jargon
Stephen Wolfram What Is ChatGPT Doing … and Why Does It Work?
On improving your success with AI content generation
Anthropic Prompt engineering overview
On how to think about AI’s impact and potential
Kevin Roose Futureproof: 9 Rules for Surviving in the Age of AI
Andy Masley Using ChatGPT is not bad for the environment
James O'Donnell AGI is suddenly a dinner table topic
From Burnout to Balance: AI-Enhanced Work Models for the Future
NOEMA AI Could Actually Help Rebuild The Middle Class
On the risks & challenges of AI proliferation
Juha-Matti Santala Be careful with introducing AI into your notes
Maggie Appleton The Expanding Dark Forest and Generative AI
Al Jazeera English The AI series: AI and Surveillance Capitalism | Studio B: Unscripted
Stephen Wolfram Will AIs Take All Our Jobs and End Human History—or Not? Well, It’s Complicated…
Part 1: The Past, Present, and Possible Futures
Lee Gesmer Copyright And The Challenge of Large Language Models (Part 1)
On the design of AI-powered tools
Nielsen Norman Group Scope in GenAI Features
Amelia Wattenberger Why Chatbots Are Not the Future of Interfaces
Maggie Appleton Language Model Sketchbook, or Why I Hate Chatbots
On AI governance
Alaura Weaver The importance of AI ethics in business: a beginner’s guide