Many tasks within a marketer’s day-to-day life are prime targets for Artificial Intelligence (AI) optimisation, but understanding how to best utilise AI is crucial.
I recently took Anthropic’s AI Fluency course, covering the frameworks and foundations required to work effectively, efficiently, ethically, and safely with AI systems. The course highlighted competencies and ways of thinking that apply to many different generative AI tools, so any marketer who currently uses AI, or is looking to introduce it to streamline their workload, can benefit from understanding this methodology.
Before you start using AI, you need to understand what AI tools can and can’t do to assist you effectively.
What is generative AI?
Traditional AI models, such as voice assistants like Siri and Alexa, are trained to analyse and categorise, completing particular predetermined tasks. By contrast, generative AI models can create something completely new that didn’t exist before.
Large Language Models (LLMs) are a form of generative AI, trained to predict and generate human language. ‘Large’ refers to the billions of parameters contained within the model that allow it to perform in this way. Popular LLMs include Claude, ChatGPT, and DeepSeek.
Generative AI capabilities
- LLMs are useful for a wide variety of tasks and have the benefit of being able to switch between any task without needing additional training. For example, the same LLM can help you draft an email to a colleague, but also help you understand quantum physics.
- Many modern LLMs can connect to external tools and information sources, using other applications to expand their capabilities further (such as searching the web or processing files).
- LLMs are able to maintain the thread of a conversation, referring back to information you provided in earlier prompts within the same thread.
Generative AI limitations
- LLMs are bound by their training data. An AI’s ‘knowledge cutoff date’ refers to the date after which they have no innate knowledge of the world. For example, if an AI was trained with information up to 31 December 2024, it would not be able to refer to information or data from any point in 2025.
- LLMs need tools like web search to learn more about recent developments after their knowledge cutoff date.
- An AI’s training process does not verify every fact in the training data, meaning an LLM may have been taught incorrect information that it believes to be factual.
- Even when an AI’s training data is factually correct, an LLM can make mistakes when piecing together the information it has learnt.
- LLMs generate information based on statistical patterns, which can sometimes produce ‘hallucinations’: outputs that sound plausible but are actually incorrect.
- Every LLM has a limit to how much information it can consider in a single interaction, known as its ‘context window’.
- LLMs produce non-deterministic output, which means you may get a different answer each time you ask an AI model the same question.
- This is great for brainstorming and generating diverse ideas, but requires awareness when consistency or accuracy are critical to the task.
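As a toy illustration only (not how any real model is implemented), you can picture non-deterministic generation as sampling the next word from a probability distribution rather than always picking the single most likely option:

```python
import random

# Toy sketch of non-deterministic generation: the model samples the next
# 'token' from a probability distribution. Real LLMs work over tens of
# thousands of tokens and far richer distributions - this only illustrates
# why two runs of the same prompt can differ.
next_tokens = ["great", "good", "excellent", "fine"]
probabilities = [0.4, 0.3, 0.2, 0.1]

def sample_completion(prompt, rng):
    """Return the prompt plus one sampled 'token'."""
    token = rng.choices(next_tokens, weights=probabilities, k=1)[0]
    return f"{prompt} {token}"

prompt = "The campaign results were"
# Two runs with different random states can produce different outputs
run_a = sample_completion(prompt, random.Random(1))
run_b = sample_completion(prompt, random.Random(7))
```

Because the sampling is weighted rather than fixed, the same prompt can finish in several plausible ways, which is exactly the behaviour described above.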
How can you interact with AI?
There are three main ways that people interact with AI:
Automation: The AI completes specific tasks based on your instructions.
Augmentation: You and the AI collaborate as creative thinking and task execution partners.
- Humans are better at critical thinking, judgment, creativity, and ethical oversight, whereas AI can provide speed, scale, pattern recognition, and processing abilities.
Agency: You configure AI to work independently on your behalf, establishing its knowledge and behaviour patterns rather than just giving it specific tasks.
Four core competencies for utilising AI
The AI Fluency training lays out the four core competencies required for working effectively, efficiently, ethically, and safely with AI systems.
Delegation
Delegation is the vital first step, helping you determine the best way to interact with AI for the task at hand (automation, augmentation, or agency). The process of delegation can be split into three steps.
Problem awareness: The ability to clearly define your goals and understand what work is needed.
- What are you trying to accomplish, and what does ‘success’ look like?
- What kind of thinking and work is needed to get there?
Platform awareness: The working knowledge of available AI systems and their specific capabilities and limitations.
- What AI models offer the right functionalities for the work you have in mind?
- What are your priorities, and what AI system works best for these? E.g., speed, creativity, depth, or accuracy.
Note that AI is constantly evolving, so it’s best to regularly experiment with the capabilities of different AI models and learn from your own experience.
Task delegation: The strategic process of dividing work between humans and AI.
- What steps could be fully automated, and for which aspects would augmentation create more value than automation?
- What steps should be completed by a human alone?
Description
The description competency refers to how we communicate clearly with AI systems. You can’t just write simple prompts and expect the perfect output. Explain tasks, ask questions, and provide context to guide the interaction in the ways you would like. Build a shared thinking environment where both you and the AI can each do your best work.
Product description: The ability to clearly define the characteristics of your desired output.
- Give the AI the information it needs to deliver what you’re actually asking for, rather than leaving it to guess what you want.
- Clearly describe what you are expecting the AI to deliver, including context, format, target audience, style, and any other key constraints.
Process description: The ability to guide the AI’s thought process.
- When talking to AI, ‘how’ can be more important than ‘what’.
- Specify certain data to use, key tasks to prioritise, and the preferred order for the AI to carry out its tasks.
Performance description: The ability to define the behavioural aspects of an AI interaction.
- AI tools are interactive systems that behave differently depending on the context provided.
- You must explain exactly how you want the AI to behave.
Discernment
Even the most advanced AI systems can make reasoning errors, produce factual mistakes, or act in unexpected ways. Discernment is the process of evaluating what AI gives you to ensure suitability for your needs.
Effective discernment requires knowledge of the AI system at hand and its typical shortcomings, and enough expertise in the domain of your request to determine the quality of the response.
Product discernment: Understanding whether the output actually helps you move forward with the task at hand.
- Is the output factually accurate?
- Is the output coherent and well-structured?
- Does the output meet your requirements, and is it appropriate to the desired audience and purpose?
Process discernment: Judging the quality of the AI’s problem-solving approach. When assessing an AI’s output, look out for:
- Logical inconsistency.
- Lapses in attention or inappropriate steps.
- Getting stuck on one small detail or trapped in circular reasoning.
Performance discernment: Evaluating how the AI behaves during your interaction.
- Is the communication style appropriate, and is the information at the right level for the intended audience?
- Is the AI’s response appropriate to any feedback given?
- Is the interaction efficient?
Discernment isn’t just about evaluation – it’s also the time to provide feedback to improve the output:
- Specify any issues with the output and clearly explain why these are a problem for the task at hand.
- Provide concrete suggestions for improvement.
- Revise your instructions or examples.
The discernment step of the process may highlight that the AI model is struggling to effectively complete the request. At this point, revisit the first step (delegation). Are you using this AI tool in the correct way for its capabilities and limitations? Is this the right AI tool for the job, or do you need to try a different system?
Diligence
Diligence focuses on ethics and safety, ensuring your interaction with AI is responsible, transparent, and accountable. Ethics and safety are just as important as effectiveness and efficiency when working with AI.
The diligence competency recognises that AI systems and our interactions with them don’t exist in a vacuum. The training course related the use of AI to driving a car – it’s not just about getting from point A to point B, it’s also about following the rules of the road and remaining aware of how our driving affects other road users.
Responsible AI use starts with awareness:
- What are the implications of working with this particular AI tool?
- Who may be affected by what is created, the collaboration itself, or any missed inaccuracies?
- Who has access to the data used to produce this output?
Creation diligence: The ability to be critical and intentional about which AI systems you use and how you use them.
- How was the AI system trained and built, and what data was used to train it?
- Who owns the data that you’re putting into the AI, and who may have access to this data once it’s been shared with the system?
- How does the AI tool align with your organisation’s policies?
Before sharing sensitive data or information with an AI tool, check whether the tool has appropriate data protection policies in place, and whether your organisation allows data sharing in this manner.
Transparency diligence: People have the right to know when AI has played a significant role in content creation, or other decisions that affect them. Transparency diligence is the ability to be open and accurate about AI interaction with relevant stakeholders.
- Who needs to know that AI assisted with this task?
- How should you communicate it, and with what level of detail?
Deployment diligence: The ability to take informed responsibility for the outputs you use or share after they’ve been created with AI assistance. This step is similar to product discernment, but with a higher focus on ethics and safety.
- Check the output for biases.
- Check the usage rights of the output.
Writing effective AI prompts
Effective AI prompting blends familiar human conversation skills, such as being clear, providing well-written context, and giving concrete examples, with a few considerations specific to AI:
- Being more explicit about things that humans could naturally infer
- Accommodating the AI’s limited context window
- Following any formatting conventions that make it easier for the AI to process information (where relevant – more advanced models often don’t require this step)
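Accommodating the limited context window often means trimming older parts of a long conversation. Here’s a minimal sketch of that idea, using word counts as a crude stand-in for tokens (real tools use proper tokenisers, and the function name is my own):

```python
# Toy sketch of managing a context window: once a conversation exceeds
# the model's limit, older messages must be dropped (or summarised).
# Word counts crudely approximate token counts here.

def trim_to_window(messages, max_tokens):
    """Keep the most recent messages that fit within max_tokens."""
    kept = []
    total = 0
    for message in reversed(messages):  # walk newest-first
        size = len(message.split())
        if total + size > max_tokens:
            break
        kept.append(message)
        total += size
    return list(reversed(kept))  # restore chronological order

history = ["first question",
           "a very long detailed answer about analytics",
           "follow-up question",
           "short reply"]
recent = trim_to_window(history, max_tokens=8)
# Only the most recent messages survive the budget
```

This is why very long threads can ‘forget’ details you gave earlier: the oldest context falls outside the window.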
It’s unlikely that the first prompt you give an AI will produce the exact results that you’re looking for – and that’s okay! The general iterative process when working with AI is to create a preliminary prompt, assess the AI’s response, and then refine the prompt to tailor a more suitable result. You may need to refine the prompt multiple times until you receive the final ideal output, but this is a great way to learn the kind of information that the AI requires in order to produce the best result. Take these learnings into consideration for future interactions with the AI system.
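The prompt–assess–refine loop described above can be sketched in code. The `ask_model` function here is a stub standing in for whatever AI tool you use, and all the names are my own invention, so treat this as a structural sketch rather than a real implementation:

```python
# Sketch of the iterative prompting loop: prompt, assess, refine, repeat.

def ask_model(prompt):
    # Placeholder: a real implementation would call your AI tool here.
    return f"Response to: {prompt}"

def refine(prompt, is_good_enough, improve, max_rounds=3):
    """Iteratively refine a prompt until the output passes your check."""
    output = ask_model(prompt)
    for _ in range(max_rounds):
        if is_good_enough(output):
            return output
        prompt = improve(prompt, output)  # tighten the prompt and retry
        output = ask_model(prompt)
    return output

result = refine(
    "Draft a product launch email.",
    is_good_enough=lambda out: "launch" in out.lower(),
    improve=lambda p, out: p + " Keep it under 100 words.",
)
```

The key point is that assessment and refinement are part of the workflow by design, not a sign that something went wrong.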
Foundational prompting tips for effective AI use
The course outlined some core tips for writing effective prompts, which can be used for any generative AI model.
Provide context
Be specific and clear about what you want, why you want it, and who you are. The AI needs to understand enough context about your query in order to produce a suitable response, and clearly specifying how you want the AI to behave can change the way it approaches a task.
Here are two examples:
For explanations: Please explain how rainbows form from the perspective of an experienced science teacher speaking to a bright 10-year-old who’s interested in science.
For brainstorming and feedback: As a UX designer, review this website wireframe and suggest three improvements focusing on user navigation and accessibility.
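If you find yourself writing prompts like these repeatedly, it can help to template the context. This small helper mirrors the structure of the examples above; the field names (`role`, `audience`, `task`) are my own convention, not something from the course or any particular AI tool:

```python
# A small helper for assembling context-rich prompts: who the AI should
# be, what it should do, and who the answer is for.

def build_prompt(role, audience, task):
    return (f"As {role}, {task} "
            f"Tailor your response for {audience}.")

prompt = build_prompt(
    role="an experienced science teacher",
    audience="a bright 10-year-old who's interested in science",
    task="please explain how rainbows form.",
)
```

Keeping role, task, and audience as separate pieces makes it easy to reuse the same structure across different requests.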
Specify output constraints
The AI can’t read your mind – you need to tell it exactly what sort of output you’re expecting. What format would you like the response in, and how long should it be? If you’re asking the AI to write a piece of code, what language should it be in? If you’re generating a design, what colours would you like used?
Here’s an example of a prompt using proper output specification:
Create a clean, modern, single-page portfolio website design with smooth scrolling between sections. Include these main sections: About Me, Skills, Portfolio/Projects, Experience, and Contact. Make the navigation menu sticky and responsive, with a hamburger menu on mobile. Use a sunset colour palette and add a dark/light mode toggle in the navigation.
Break down complex tasks
Breaking down complex tasks into simpler steps can make it easier for the AI to work in the way that you’re expecting. This is known as ‘chain of thought prompting’. Modern reasoning models or extended thinking models are often capable of performing step-by-step reasoning on their own, but guiding the process helps ensure the output aligns with your needs.
Take the basic prompt: Analyse this quarterly sales data. You may break it down into the following steps:
I’d like to analyse this quarterly sales data. Please approach this by:
- Looking through our sales records to identify the top-performing products.
- Comparing current quarter results to the previous quarter.
- Highlighting any unusual patterns or trends.
- Suggesting possible reasons for these trends.
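The breakdown above can also be assembled programmatically if you reuse the same chain-of-thought structure across tasks. This is a sketch under my own naming conventions, not a method from the course:

```python
# Sketch: turn a task plus a list of sub-steps into a chain-of-thought
# style prompt, mirroring the sales-data breakdown above.

def chain_of_thought_prompt(task, steps):
    lines = [f"I'd like to {task}. Please approach this by:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = chain_of_thought_prompt(
    "analyse this quarterly sales data",
    ["Identifying the top-performing products.",
     "Comparing current quarter results to the previous quarter.",
     "Highlighting any unusual patterns or trends.",
     "Suggesting possible reasons for these trends."],
)
```

Numbering the steps makes the intended order explicit, which is the point of guiding the process rather than just stating the goal.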
Give the AI space to think
The idea of an AI having the ability to ‘think’ gives me the heebie-jeebies, but if you can push past that feeling, it can help strengthen the output for your query. Allowing an AI space to think can produce a more thorough and well-considered response.
Here’s an example of how you could phrase this (to be added at the end of your prompt):
Before answering, please think through this problem carefully. Consider the different factors involved, potential constraints, and various approaches before recommending the best solution.
Ask the AI for help with prompting
Don’t forget that you can simply ask the AI itself for the best way to format a prompt. For example:
I’m trying to get you to help me with [goal]. I’m not sure how to phrase my request to get the best results. Can you help me craft an effective prompt?
When specifying your goal, make sure to include as much detail and context as possible. The AI needs to understand what exactly you want it to accomplish in order to generate the most suitable prompt.
Still receiving inadequate responses?
If you’ve exhausted the above tips and are still finding the AI’s output unsuitable for your needs, try:
- Asking for variations: Can you give me three different versions of this?
- Requesting different formats: Instead of bullet points, can you present this as a paragraph?
- Checking the AI’s confidence: How confident are you about this answer?
- Resetting the conversation in a new thread – this may give better results than trying to correct a conversation that’s gone off track throughout multiple rounds of prompting and output.
Practising AI fluency
While this blog post gives an overview of the key discussions in Anthropic’s AI Fluency course, the course itself goes into much more depth on these topics, with more examples and hands-on tasks to complete with your AI system of choice to put them into practice.
The course is completely free and can be completed in just a few hours. Take it for yourself here.
Anthropic have also produced a handy glossary of terms relating to AI fluency. It can be accessed here.