Artificial Intelligence (AI): What It Is & Tips to Tell What’s Real 

Written by Halifax Public Libraries' Regional Technology Team

About Media Literacy Week

In an increasingly complex digital world, we often find ourselves asking, “wait… what?” Tools like AI or social media, and challenges such as misinformation or online hate, can seem difficult to navigate. Media Literacy Week is here to answer Canadians’ questions about being online in changing times.

Hosted by MediaSmarts, Media Literacy Week is a national event to promote digital media literacy, with activities and events taking place in classrooms, libraries, museums and community groups across Canada.

Read on for some tips and tricks from our Halifax Public Libraries' Regional Technology Team to help you better understand the digital landscape.


What is AI? 

Artificial Intelligence, or AI, is computer technology that can do things we usually associate with people, like writing or creating pictures. It doesn’t think like a human, though. Instead, it learns patterns from large amounts of data. For example, if you show it thousands of cat photos, it learns to recognize cats. If you give it millions of sentences, it learns to write new ones that sound human, based on the patterns it has seen.
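If you are curious what “learning patterns from examples” can look like, here is a tiny, hypothetical sketch in Python using the scikit-learn library. The handful of labelled sentences is made-up toy data, and no real product works exactly like this; actual systems learn from millions of examples.

```python
# A minimal sketch of learning patterns from labelled examples,
# using the scikit-learn library (toy data, for illustration only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up example sentences, each labelled "cat" or "dog".
sentences = [
    "the cat purred on the windowsill",
    "a kitten chased the ball of yarn",
    "the dog barked at the mail carrier",
    "a puppy fetched the stick in the park",
]
labels = ["cat", "cat", "dog", "dog"]

# The model never "understands" cats or dogs; it only counts which
# words tend to appear alongside which label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)

# For a new sentence, it guesses the label whose word patterns fit best.
print(model.predict(["the kitten purred softly"]))  # prints ['cat']
```

With only four sentences to learn from, its guesses are easy to fool, which is exactly why real systems need such vast amounts of data.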

You’ve probably already used AI without realizing it. When a streaming service recommends a movie, or your email automatically filters out spam, that’s AI at work.  

AI doesn’t always get it right, though. Maybe you’ve seen TV recommendations that don’t match your interests, or emails incorrectly flagged as spam that should have ended up in your inbox. These examples show that while AI can be helpful, it isn’t perfect.

Misinformation and disinformation 

AI doesn’t actually “know” anything. It simply predicts what comes next, which means it sometimes makes mistakes. 
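To see what “predicting what comes next” means, here is a small, made-up Python sketch that uses only the standard library. It counts which word follows which in a few sample sentences, then strings the most common choices together. The point is that the result sounds fluent because it follows familiar patterns, not because anything was fact-checked.

```python
# A minimal sketch of next-word prediction (toy data, for illustration only).
from collections import defaultdict, Counter

text = (
    "the library opens at nine . "
    "the library closes at six . "
    "the museum opens at ten ."
)

# Count, for each word, which words tend to follow it.
followers = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

# Build a sentence by repeatedly choosing the most common next word.
word, sentence = "the", ["the"]
for _ in range(5):
    if not followers[word]:
        break
    word = followers[word].most_common(1)[0][0]
    sentence.append(word)

print(" ".join(sentence))  # prints something like: the library opens at nine .
```

The sentence it produces is grammatical and plausible, but the program has no way of knowing whether the library really opens at nine. Real AI systems are vastly more sophisticated, yet the basic idea is the same: patterns, not knowledge.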

If people believe these mistakes and share them, that’s called misinformation. Misinformation happens when false ideas are spread by accident. For example, an AI might generate an incorrect news headline saying a celebrity had passed away when they didn’t. If people share it without checking, they spread false information without meaning to.

When someone uses AI to create false content on purpose, it becomes disinformation. For example, someone could use AI to make a fake video of a political figure endorsing a product or a policy they never supported.  

How to tell what’s real 

AI is exciting, but it also raises an important question: How do we know what’s real and what’s not? 

Before you trust what you see online, take a moment to pause and check. Start by asking yourself these questions: 

  1. Who created it? Check if it comes from a reputable source, like a recognized news outlet, organization, or official account. 
  2. Do other sources report it? Look for confirmation from other trusted sources to see if the information is consistent. 
  3. Can you find the original? Trace photos, videos, or claims back to where they first appeared to see if they are authentic.  
  4. Does it seem too extreme or surprising? If something feels shocking or unbelievable, it’s worth double-checking before sharing. 

These steps can go a long way in spotting misinformation and disinformation. 

AI is an incredible tool for creating and learning, but it can also spread mistakes or be used to trick people. Staying informed and double-checking what you see helps keep you safe. 

AI isn’t going away, but with awareness and care, we can all use it wisely. 


Artificial Intelligence (AI) Resource List

Artificial Intelligence (AI) is everywhere these days, but what is it? This all-ages resource list includes books and websites that can help explain what AI is and how to use it safely.

View Full List