Generative Artificial Intelligence Meaning (For Beginners)

Looking for the Generative Artificial Intelligence (AI) meaning?

Generative Artificial Intelligence (AI) is a type of AI that creates new content.

This could be videos, text, images, audio or code.

It does this by analysing training examples and learning their patterns and distribution.

Overview

I’ve divided this definition guide into 9 parts:

  1. What is Generative Artificial Intelligence?
  2. What’s the Difference Between AI, Generative AI, Large Language Models, and Machine Learning?
  3. How is Generative AI different than traditional machine learning?
  4. How Does Generative AI Work?
  5. What Can Generative AI Do?
  6. What Are The Benefits of Generative AI?
  7. What Are the Limitations of Generative AI?
  8. Further Reading
  9. Conclusion

Generative Artificial Intelligence Meaning

Imagine a painter who can create captivating art after studying masterpieces.

Or a chef who invents perfect dishes after studying different cuisines.

Generative Artificial Intelligence (AI) works on a similar principle.

It learns from billions of data points (images, text, videos, audio or even code).

And like an artist with a blank canvas, it generates brand new content that never existed before.

Generative AI can dive deep into the patterns of the information it learned.

It can generate original output instead of only echoing its historical training data.

It creates content that it wasn’t explicitly programmed to produce.

Here’s an example you might be familiar with:

Imagine if you wanted to search for the best AI tool for product photos.

You would go to Google and start typing, “What is the best AI tool for…”

And this starts to happen:

Screenshot of google autocomplete for the term "what is the best AI tool for"

Google’s AI is predicting what you’re looking for.

How?

It’s not only because it has billions of data points on user search histories and behaviours.

But it’s also because it can grasp the context of your words as you type them.

It’s like a best friend that can tell what you’re about to say.

What’s the difference between AI, Generative AI, Large Language Models, and Machine Learning?

When I first studied Artificial Intelligence, all these different terms confused me:

  • Artificial Intelligence
  • Machine Learning
  • Large Language Models and
  • Generative Artificial Intelligence (Gen AI)

Let’s break this down so you can understand better.

Here is how Dr. Gwendolyn Stripling from Google defines Artificial Intelligence:

“Artificial Intelligence (AI) is a discipline, like Physics. AI is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn and act autonomously.”

To simplify, AI is like the umbrella term.

It’s the big picture that’s all about making machines smart.

Whether that’s getting them to talk, learn or even make decisions.

It’s replicating or simulating any aspect of human intelligence.

Machine Learning (ML) is a subset of Artificial Intelligence.

It’s the method by which we teach computers to learn from and interpret data without explicitly programming them for every task.

You can think of it as teaching a computer to learn through experience.

Machine Learning has 2 primary classes:

  1. Supervised Learning – Imagine a student learning with the help of a tutor. That student gets constant, specific feedback. Similarly, supervised ML algorithms learn from labeled data and use it as a reference to predict outcomes.
  2. Unsupervised Learning – This is like studying on your own. You learn without a tutor or direct guidance. Unsupervised algorithms go through unlabeled data to find hidden structures or insights. (There’s a small code sketch of both right after this list.)
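
If you like seeing things in code, here’s a tiny sketch of the difference using the scikit-learn library (the numbers are made up, just to show the idea):

```python
# Supervised: we hand the algorithm examples (X) together with the "answers" (y).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[1, 1], [2, 1], [8, 9], [9, 8]]   # toy data points
y = [0, 0, 1, 1]                       # labels -- the feedback a "tutor" provides

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.5, 1.0]]))       # -> [0], predicted from labelled experience

# Unsupervised: same data, no labels -- the algorithm finds the groups on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)                        # two clusters discovered without a tutor
```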

Generative Artificial Intelligence (Gen AI) takes Machine Learning a step further.

It doesn’t just learn from data; it creates new data that is similar, but not identical, to what it has seen before.

It’s like a robot that not only learns recipes but can invent new dishes.

Think of it this way:

Traditional machine learning is about understanding and following recipes.

And Generative AI is like a master chef who can create new, unique dishes.

Take ChatGPT as an example.

It was trained on billions of data points from the Internet.

This extensive knowledge lets users do more than just recall information.

You can:

  • generate new content ideas
  • summarise long YouTube videos
  • write a creative essay.

Large Language Models (LLMs) are a specialised form of Generative AI that focuses on understanding and generating human language.

They pick up the rules of grammar by learning from an extensive corpus of text.

This allows LLMs to perform tasks like:

  • Translating languages
  • Answering questions
  • Writing poetry

So you can think of it this way:

AI is like the entire discipline of gardening.

Machine Learning is the gardener who learns how plants thrive through trial and error.

Generative AI is the person who crossbreeds plants to produce new plant species.

LLMs are the landscape architects that design intricate gardens.

Each one has its role, and together, they create something pretty spectacular.

Here’s a simple diagram to see the relationship between these concepts.

Comparison diagram showing the hierarchical structure in AI and Mathematics. AI is compared to Maths, ML to Algebra, Generative AI to Calculus, and LLM to Differential Equations, illustrating the progression from general concepts to specific technologies or theories in both fields.

How is Generative AI different than traditional machine learning?

You can think of traditional machine learning as a specialist who excels in one area.

It’s like a chef who has perfected a single recipe after years of practice.

Traditional machine learning models train to perform specific tasks very well.

Whether it’s:

  • predicting election outcomes by studying years of voting patterns or
  • guessing your next favourite movie by examining your past viewing habits.

But traditional machine learning has its limits.

It’s only able to do well in its domain.

So even though it might be a chef that’s an expert in that one dish, it might not adapt well to cooking other types of dishes.

Generative AI is like a culinary genius who can cook a lot of dishes and invent new recipes.

It learns from a large, general data set.

Through this generalised training, we can adapt it to specialised tasks.

This makes Generative AI versatile.

Because it can take on tasks it was never explicitly programmed to do.

For example, I use a Generative AI tool called Elevenlabs.

This tool creates a digital version of anyone’s voice.

I’ve seen other creators use this technology to create a specialised use case – AI song covers.

youtube screenshot of AI cover videos

How Does Generative AI Work?

Let’s first try to understand its foundational process.

It all starts with data – lots and lots of it.

Generative AI models are trained on extensive datasets.

These could be images, text or even complex patterns of user behaviour.

diagram of training data such as images, video, text and audio being used to create a generative AI model

It uses neural networks to find patterns, structures and relationships within this data.

(Neural networks are computer systems loosely modelled on the way the human brain works.)

Older AI models required meticulously labeled datasets.

This meant that you needed to individually mark the data.

annotating vehicles with 3d bounding boxes

(Trust me, it’s very tiring work.)

But generative models often use unsupervised or semi-supervised learning techniques.

This allows generative AI models to use unlabeled data to develop foundation models.

This way, you save time because you don’t need to label data manually.
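
Here’s a tiny illustration of why unlabeled text is enough (a sketch of the idea, not any particular model’s training code): with next-word prediction, the “labels” come from the text itself.

```python
# Self-supervised learning in miniature: each training pair is
# (the words so far, the next word) -- no human labelling required.
text = "generative ai learns patterns from lots of examples"
words = text.split()

pairs = [(words[:i], words[i]) for i in range(1, len(words))]
for context, target in pairs[:3]:
    print(context, "->", target)
# ['generative'] -> ai
# ['generative', 'ai'] -> learns
# ['generative', 'ai', 'learns'] -> patterns
```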

But training on these massive unlabeled datasets is very expensive in compute and infrastructure.

GPT-3 (used by ChatGPT) and Stable Diffusion are examples of foundation models.

They serve as a strong and versatile foundation to carry out a variety of tasks.

Here’s an example.

Stable Diffusion allows you to create photo-realistic images from a text command.

Beneath the friendly user interface lies a symphony of technical processes:

flowchart for generating a machine learning model

1. Data Collection and Preparation

You need to meticulously collect, clean and prepare data for training.
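
Here’s a minimal sketch of what that preparation step can look like for a text dataset (the folder name and cleaning rules are just examples, not a full pipeline):

```python
from pathlib import Path

# Load every raw text file from an (assumed) "raw_data" folder.
raw_docs = [p.read_text(encoding="utf-8") for p in Path("raw_data").glob("*.txt")]

cleaned, seen = [], set()
for doc in raw_docs:
    doc = " ".join(doc.split())        # normalise whitespace
    if len(doc) < 50 or doc in seen:   # drop near-empty and duplicate documents
        continue
    seen.add(doc)
    cleaned.append(doc)

print(f"Kept {len(cleaned)} of {len(raw_docs)} documents")
```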

2. Model Selection and Training

You need to choose the appropriate neural network architecture.

This would depend on what type of Generative AI model you want to create (there’s a short code sketch after this list).

This could be:

  • convolutional layers for image data
  • recurrent layers for sequential data or
  • transformers for tasks that require understanding context within large bodies of text.
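
Here’s a rough PyTorch sketch of what those choices look like in code (layer sizes are illustrative; a real model stacks many of these):

```python
import torch.nn as nn

# Convolutional layer: good at spotting local patterns in image data.
image_layer = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)

# Recurrent layer: processes sequential data one step at a time.
sequence_layer = nn.LSTM(input_size=128, hidden_size=256, batch_first=True)

# Transformer layer: uses attention to understand context across long text.
text_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
```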

3. Encoding into Latent Space

For generative tasks, where we want to create new things, the model first needs to learn the building blocks of those things.

To do this, they look for patterns in a lot of examples.

The more examples, the better.

Now, imagine a chef that has a magic spice rack – this is our “latent space”.

magic spice rack with a chef next to it
This was created using generative AI (Dall-E)

Instead of spices, each bottle contains a core feature of our data.

When the chef wants to create a new dish, he doesn’t need to start from scratch.

Instead, he takes pinches of these core features and mixes them together to make something new.

In the digital world, when we want to make a new photo or generate a sequence of text, the model doesn’t randomly mash things together.

It takes ‘pinches’ of these core features from the ‘magic spice rack’ to whip up a brand-new image or tune.

The latent space is the secret to making sure these creations aren’t just random, but have the same foundation as the examples they learned from.
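
Here’s a toy sketch of that “spice rack” idea. The latent codes and the decoder here are stand-ins (a trained model would learn them), but the mixing step is the real trick:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_a = rng.normal(size=16)   # pretend: the learned code for one kind of image
latent_b = rng.normal(size=16)   # pretend: the learned code for another kind

def mix(a, b, amount):
    """Take 'pinches' of two latent codes: 0.0 gives pure a, 1.0 gives pure b."""
    return (1 - amount) * a + amount * b

new_code = mix(latent_a, latent_b, 0.3)
# image = trained_decoder(new_code)  # a real model's decoder would turn this code into a new image
print(new_code[:4])
```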

4. Iterative Refinement

Depending on the architecture, we can use different strategies to refine the model’s output.

  • Generative Adversarial Networks (GANs):

Jason Brownlee gives a simple explanation of how GANs operate:

“The GAN model architecture involves two sub-models: a generator model for generating new examples and a discriminator model for classifying whether generated examples are real, from the domain, or fake, generated by the generator model.”

Source: https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans

The discriminator is the part that critiques the generator’s output.

It challenges the generator to produce more authentic results.

Think of a GAN as an art forger (the generator) trying to create a convincing fake painting.

And an art critic (the discriminator) trying to detect the forgery.

Over time, the forger becomes so skilled that the critic can’t tell the difference between the fake and real paintings.

This is what GANs do with data:

They generate new data samples (like images) that are almost indistinguishable from authentic ones.
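
Here’s a stripped-down PyTorch sketch of that forger-vs-critic training loop (random vectors stand in for real images, and both networks are tiny, purely to show the structure):

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real training examples

for step in range(100):
    # 1) Train the discriminator (the "art critic") to tell real from fake
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_batch), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator (the "forger") to fool the discriminator
    g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```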

  • Variational Autoencoders (VAEs):

“The Variational Autoencoder (VAE) architecture comprises of two key components: an encoder that maps input data to a probabilistic latent space, and a decoder that reconstructs data samples from this space. The encoder learns to approximate the posterior distribution of latent variables given the input. While the decoder aims to generate new data samples that are indistinguishable from the original dataset, by sampling from the latent space. This process is guided by the optimization of the Evidence Lower BOund (ELBO), which acts as a proxy for maximizing the data likelihood while regularizing the latent space distribution.”

VAE is a likelihood estimator that assesses how well the output captures the probability distribution of the data.

Imagine a VAE as a dream simulator.

It analyzes a set of images and learns to imagine new images that could pass as part of the original set.

It doesn’t just replicate the images it’s seen.

Instead, it learns to understand the underlying patterns and variations to dream up new, unique images.
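
And here’s a minimal VAE sketch in PyTorch, showing the encoder, the sampling step, the decoder and the two-part loss described in the quote above (sizes and data are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, data_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Linear(data_dim, 32)
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

    def forward(self, x):
        h = F.relu(self.encoder(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(z), mu, logvar

vae = TinyVAE()
x = torch.randn(32, 64)                       # stand-in for a batch of real data
recon, mu, logvar = vae(x)

recon_loss = F.mse_loss(recon, x)             # how well the decoder rebuilds the input
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # keeps the latent space well-behaved
loss = recon_loss + kl                        # the (negative) ELBO, up to constants
loss.backward()
```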

  • Transformers:

Here’s how the authors of the original Transformer paper describe it:

“(The) Transformer (is a) sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures.”

Transformers use self-attention mechanisms to generate sequences of data (like texts).

This helps them to predict the next item in the sequence with context.
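
Here’s what that self-attention step looks like in a bare-bones NumPy sketch (four made-up “tokens”, tiny dimensions, a single attention head):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how much each token attends to every other token
    weights = softmax(scores, axis=-1)
    return weights @ V                        # a context-aware representation of each token

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 toy "tokens", embedding size 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```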

If you’re using an iPhone, you might notice that Apple tries to predict what your next word will be as you type it.

screenshot of Apple auto suggest

But sometimes these predictions don’t make any sense.

That’s because it doesn’t understand context.

Transformers solve this problem.

You can experience this with ChatGPT.

ChatGPT will produce responses that are contextually relevant to your interactions.

screenshot of conversation with ChatGPT regarding whether dogs or cats are better
Note: I did force it to choose cats.

(Even if it might not always be right).

5. Final Generation

The result of this process is the generation of new and original content.

This content reflects the patterns and structures that the model has learnt from the training dataset.

This could mean a new piece of music, a fresh piece of copywriting or an original image.

What Can Generative AI Do?

Generative AI technology has actually been around since the 1960s.

But you might not have heard of it until November 2022 – with the release of ChatGPT to the general public.

This opened up a whole world of possibilities for everyday users like you and me.

But what exactly can Generative AI do?

How useful will it be for your daily life?

Let’s dive into a comprehensive list of Generative AI applications:

1. Transforming Text

Text-to-Text Model

You can generate text outputs by giving the model a text input.

text to text model example

Imagine you’re rushing to meet a deadline and you need to send a quick email to your colleague.

Instead of typing it out, you can use text-to-text models like ChatGPT or Claude to convey what you want.

It will use your instructions to create a well-structured email in seconds.

I’ve used this a lot and it’s a lifesaver.

Text-to-Text software: ChatGPT, Claude, Google Bard, Bing.

You can also use text-to-text Generative Artificial Intelligence to:

  • Translate texts
  • Complete texts
  • Summarise texts
  • Generate ideas
  • Rewrite or edit content
  • Extract content
  • Classify texts
  • Write code
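
If you’d rather call a text-to-text model from code than from a chat window, here’s a minimal sketch using the official openai Python package (the model name and prompt are just examples, and you’d need your own API key set as an environment variable):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name -- swap in whichever model you have access to
    messages=[
        {"role": "user",
         "content": "Write a short, friendly email telling my colleague the report will be a day late."},
    ],
)
print(response.choices[0].message.content)
```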

Image-to-Text Models

Generate text with image prompts.

image to text model example

My brother was traveling to Japan in August 2023.

He couldn’t speak, write or read Japanese.

Whenever he came across a sign in Japanese, he used Google Lens (a generative AI tool).

He would snap a picture of the sign, and Google Lens would instantly translate the text into English.

That’s just one of the many use cases for image-to-text.

With this model, you can:

  • Convert physical documents into editable text files
  • Transcribe handwritten notes into digital text
  • Extract street names and other text from maps for navigation apps

Image-to-Text software: Google Lens, ChatGPT, Plugger AI

2. Bringing Images to Life

Text-to-Image Models

You can generate images by giving the model a text input.

text to image model example

I used this when I wanted to create studio-like product photos.

I tested out different image generative models that could create product photos.

I took pictures of my products and uploaded them to Pebblely.

It removed the background and let me describe, in text, the type of background I wanted.

And voila!

I got a high-quality product photo image.

pebblely product photoshoot before and after

With text-to-image technology, you can create:

  • Custom illustrations
  • Product mock-ups
  • Human models

It’s great for content creation.

Text-to-Image software: Stable Diffusion, Midjourney, RunwayML
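
If you’re curious what this looks like under the hood, here’s a minimal sketch using the open-source diffusers library with publicly available Stable Diffusion weights (the checkpoint ID and prompt are illustrative, and you’ll want a GPU for reasonable speed):

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a GPU; on CPU, drop torch_dtype above and expect it to be slow

image = pipe("studio photo of a perfume bottle on white marble, soft lighting").images[0]
image.save("product_photo.png")
```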

Image-to-Image Models

You can generate image outputs by giving the model an image.

image to image model example

I recently used Vectorize.ai to create icons from images I generated from Midjourney.

You can also use image-to-image to:

  • Transform images
  • Enhance images (improve resolution, or turn black & white into colour)
  • Convert real photos into different styles, like cartoon or watercolour painting
  • Remove unwanted objects from photos

Image-to-Image software: Photoshop (Generative feature), Getimg.ai, zmo.ai

Audio-to-Image Models

You can generate image outputs by giving the model an audio clip.

audio to image model example

With this model, you can:

  • create images that represent your music based on its audio characteristics
  • create visuals based on the game’s soundtracks
  • transform speech into images that visually represent the speaker’s emotional state

Imagine how that could help people who have trouble understanding social cues.

Audio-to-Image software: Melobytes

3. Crafting Audio and Music

Text-to-Audio Model

You can generate audio outputs by giving the model a text-based command.

text to audio model example

I’ve always been curious how podcasters create those perfect voice overs.

It turns out, many of them use Generative AI tools like Elevenlabs to clone their voice.

This means they can generate voice overs without having to record each time.

I tried and experimented with it myself.

It was like having a twin who could do all the speaking I hated.

Using this generative AI, you can:

  • Clone your own voice
  • Generate realistic voice overs
  • Make music compositions
  • Learn language (through pronunciation and intonation)

Text-to-Audio software: Elevenlabs, veed.io

Audio-to-Audio Model

You can generate audio outputs by giving the model audio input.

audio to audio model example

You can use krisp.ai to remove background noises in your live meetings.

Or use other generative AI to:

  • Reduce the file size of audio recordings while maintaining quality.
  • Restore old or damaged audio recordings to improve quality.
  • Create a synthetic version of a person’s voice from a sample recording.

Audio-to-Audio software: Krisp.ai, Adobe Podcast

Image-to-Audio Model

You can use an image to generate audio.

image to audio model example

Melobytes, for example, can take any image you upload and create a song to match that image.

There’s a lot of other applications of image-to-audio including:

  • Translating images into sounds to help visually impaired individuals perceive the visual content.
  • Enhancing art exhibitions with audio representations of visual artworks.
  • Translating images from camera traps into sounds for easier monitoring of wildlife.

Image-to-Audio software: Speechify

Video-to-Audio Model

Use a video to generate an audio output.

video to audio model example

This is what YouTubers use to publish their YouTube videos on podcast platforms.

You can also use video-to-audio to:

  • Create audio descriptions of video content for the visually impaired.
  • Isolate audio from videos as background sound effects.
  • Improve the audio track of a video for better quality.

4. Video and Animation

Text-to-Video Model

You can generate video outputs by giving a text-based command.

text to video model example

What if you could create whole videos by typing out just a few lines of text?

Now you can.

With tools like Runway ML, you can generate video scenes in seconds by typing on the keyboard.

Here are other things you can do with text-to-video:

  • Create video summaries
  • Create real estate virtual tours
  • Create custom animations
  • Develop training videos for new employees based on written training materials

Text-to-Video software: RunwayML

Image-to-Video Model

You can use an input image to generate video.

image to video model example

Imagine being able to:

  • Create a short film or advertisement from animated sequences.
  • Create cooking tutorials from images of ingredients and cooking steps.
  • Compile images from sports events to create highlight videos.

Image-to-Video software: RunwayML

Audio-to-Video Model

You can use audio to generate a video.

audio to video model results

This generative AI model takes it a step further than audio-to-image.

With this you can:

  • Create visual patterns or animations that sync with music tracks.
  • Lip-sync – Match a person’s mouth movements to a spoken audio track.
  • Add audio commentary to video content that aligns with visual cues.

Video-to-Video Model

Use a video to generate a video output.

video to video model

You can:

  • Colour correct videos.
  • Increase the resolution of a video without sacrificing quality.
  • Stabilise video footage

Video-to-Video software: Synthesia AI

5. Text and Analysis from Multimedia:

Audio-to-Text Models

You can use audio to generate text.

audio to text model

You can save hours on manual tasks such as transcribing and translating.

This way you can:

  • Convert audio recordings of your live meetings into text so you can refer back to them and keep a record.
  • Create subtitles and captions for videos automatically.
  • Convert court proceedings, depositions, and legal consultations into written form for official records.
  • Convert voicemail messages into text for quick and easy reading.

Audio-to-Text software: Elevenlabs
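
If you’d like to try this locally for free, here’s a minimal sketch using the open-source openai-whisper package (the file name is just an example):

```python
# pip install -U openai-whisper  (also requires ffmpeg installed on your system)
import whisper

model = whisper.load_model("base")              # a small, CPU-friendly model
result = model.transcribe("team_meeting.mp3")   # path to your own recording
print(result["text"])                           # the full transcript as plain text
```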

Video-to-Text Models

Take a video to generate a text output.

video-to-text model

I watch a lot of podcasts on YouTube and sometimes I’m pressed for time.

Because of that, I use video summarisers like Summarize Tech to give me notes on what the podcast is about.

You can also use video-to-text to:

  • Convert spoken words in videos into written text for documentation.
  • Transcribe meetings, webinars, and lectures for future reference.
  • Analyse video content for research purposes.

Video-to-Text software: Speakai, Otter AI

Video-to-Image Model

Take a video to generate an image output.

video to image model flow

You can use this to:

  • Generate thumbnails for YouTube videos
  • Extract frames from old films for restoration and preservation.
  • Capture still images from traffic cameras for analysis.

Video-to-Image software: Runway ML

What are the Benefits of Generative AI?

Anyone can use Generative AI, regardless of their background.

Here are some of the many benefits that this technology can offer:

1. Enhance The Learning Experience

Generative AI enhances learning for all ages, from kids to adults.

For instance, Khan Academy created an AI chatbot called Khanmigo.

The chatbot uses OpenAI’s GPT model.

This fine-tuned model can help guide a student’s progress while learning by:

  • asking them questions
  • helping them with their problems without explicitly giving the answer
khanmigo chat screenshot
Credit: https://www.khanacademy.org/khan-labs#khanmigo

It’s like having a personal tutor 24/7 that can help them learn the way they love to learn.

This is a single example, but the applications are limitless.

For example, I’ve personally used ChatGPT to help me with learning copywriting.

With this, ChatGPT helped break down my approach to studying copywriting.

It shows me how I can improve the work I am writing.

And it can even set a monthly schedule of activities I can do to improve my writing skills.

2. Speed Up & Save Money on Content Creation

Whether you realise it or not, almost everyone produces some form of content every day.

It could be the emails you send.

The messages you write.

Or the videos you produce.

Generative AI can automate almost every aspect.

If you’re a small business owner, you could save hours every week on social media content by using ChatGPT.

If you run a Youtube channel, you can use Claude to generate video scripts.

If you have an eCommerce store, you can use Pebblely to create beautiful product photos.

All these tools save you time and money.

3. Personalised for Anyone

Since you can train Generative AI like ChatGPT on your own dataset, you can use this to create:

  • personalised marketing materials
  • recommendation systems like those used by Netflix, YouTube and Amazon
  • personalised videos for outreach

Here’s a problem I faced:

I wanted to create marketing materials that were relevant to my target audience.

Here’s how I used Generative AI to help with this:

1. I surveyed my past customers and compiled all their answers in an Excel sheet.

2. I removed anything private like their names and emails.

3. I uploaded it to ChatGPT and gave it the following prompt:

Role: You are an experienced marketer in the food industry.

Objective: Your mission is to create Facebook Ads copy that will engage and provoke the user to click the ad.

Target Audience: Our target audience are women between the ages of 28-40 who have children and are health-conscious.

Details:

1. Use the data in the excel sheet to guide your copywriting. These data contain the objections, pain points and hidden desires of our target audience.

2. Make sure the copywriting is relevant and makes the target audience feel personalised.

3. Use short, relatable words that are easy to understand.

4. Use every day language and insert emojis when necessary.

5. Write 10 pieces of copy that are less than 100 words each.

Task: Create 10 Facebook Ads copy (maximum 200 words each) that are click-bait, engaging and would attract the target audience’s attention instantly.

This is the result:

ChatGPT results based on prompt mentioned earlier

Of course, this is only an example using ChatGPT.

But you can apply this concept to almost every other Generative AI tool (Google Bard, Claude etc).

4. Data Analysis Made Easier

Machine learning in general is great and efficient at analysing data.

Tableau and ThoughtSpot are analytics tools that build Generative AI into their analysis features.

ChatGPT has a built-in feature that can create graphs and read CSV and PDF files.

You can take advantage of this feature by giving ChatGPT data to draw conclusions from.

With these tools you can use Generative AI to:

  • Generate insights based on your data
  • Deliver predictions or forecasts based on your data
  • Automate processes
  • Predict customer behaviour
  • Detect patterns in user behaviour

5. Realistic Simulations for Planning

Generative AI is also efficient in creating simulations.

It’s able to simulate environments and scenarios for training and planning purposes.

Like in autonomous vehicle development, urban planning and gaming.

6. Versatile AI Applications

To me, what makes Generative AI powerful is its flexibility.

There are a lot of different applications of generative models across different industries.

McKinsey’s research estimates that generative AI could add up to $4.4 trillion to the global economy annually.

They also predict that the biggest sectors would be:

  • Customer operations
  • Marketing and sales
  • Software engineering
  • Research & development

This isn’t surprising considering how quickly Generative AI is developing.

You can use Explainpaper to understand research papers faster.

You can use FinchatGPT to get summaries of financial reports, visualise and aggregate investment data.

And you can even use YesilHealth to check your symptoms and answer health-related questions.

While Generative AI is still in its infancy, I believe it will unlock even more use cases in the future.

What Are The Limitations of Generative AI?

Generative AI models aren’t perfect.

Dr. Andrew Ng, founder of Deeplearning.ai, highlights the following limitations:

1. Knowledge Cutoffs

Generative AI models like ChatGPT have a knowledge cutoff.

This means it was only trained with knowledge up to a certain period.

Here are a few reasons why this happens:

A. Training Data Snapshot

Training Generative AI models requires feeding them a large dataset.

This can include text from books, websites or other sources.

This dataset is a snapshot of human knowledge up to a certain time frame.

Once the training is complete, the model’s knowledge does not extend beyond what it has learnt.

This means it won’t know about new updates or information that becomes available after that point.

B. Computational and Resource Constraints

To continuously update the model with the latest info, we would need to retrain it constantly.

This is expensive and resource intensive.

Training on such a wide range of data requires a lot of computational power.

This makes continuous retraining impractical.

C. Quality and Stability

With a fixed cutoff date, we can better predict the model’s responses.

This makes the model more stable.

If we constantly updated the training data, we could introduce new biases or errors.

That’s because new training data can create variability in the model’s performance and output quality.

D. Safety and Reliability

The most obvious point:

Since there is a knowledge cutoff, developers can better control the model’s performance.

In ChatGPT’s early days, we saw a lot of public effort to “hack” it.

One user, for example, was able to get ChatGPT to give out working Windows license keys.

chatgpt sharing windows key

This becomes crucial for ensuring the safety and reliability of the AI.

Especially in avoiding misinformation or harmful content.

2. Hallucinations

Hallucination is when a generative AI model creates incorrect, nonsensical or entirely fabricated information, but presents it as if it were true.

There are a few reasons why this can happen:

A. Lack of understanding:

Unlike humans, AI models can’t understand information the same way we do.

They can only predict the most probable word or phrase based on patterns in their training data.

So when a model reaches a topic that it hasn’t seen enough of in training, it can hallucinate the details.

B. Data Limitations

The training data might not be able to cover every single topic comprehensively.

So in areas where the data is sparse, the model is more prone to hallucination.

C. Complex Queries

The model struggles to find a response when you ask it complex or ambiguous questions, like a riddle.

As I mentioned earlier, the answer might not exist in its training data.

This can push the model to “make up” information.

D. Long Contexts or Instructions

If the instruction is too long, the model can lose track of key details.

Then it would start to generate incorrect responses.

I’ve also noticed that it will hallucinate if you give conflicting instructions.

It’s important to realise that while Generative AI is great, it still doesn’t have the “consciousness” we do.

As we continue to use these models, make sure to verify the content it produces.

3. Limited Input and Output Length

There’s a limit to the length of an instruction we can give to these AI models.

If you were to upload an entire book to ChatGPT or Claude, it would not be able to process all of it.

Sometimes, even if you stay within the input limit, it might miss key details of the content.

The output length also has a limit, although it is harder to exceed.

When this happens, the model will stop working and you’ll be forced to refresh the page.
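
A practical workaround is to count tokens before you send anything. Here’s a small sketch using the tiktoken library (the file name and the 8,000-token budget are just examples; check your model’s actual limit):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokeniser used by many recent OpenAI models
text = open("long_document.txt", encoding="utf-8").read()  # example file name

n_tokens = len(enc.encode(text))
print(f"{n_tokens} tokens")

if n_tokens > 8_000:                         # example budget -- depends on the model
    print("Too long for one go -- split the document into chunks first.")
```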

4. Does Not Work Well With Structured Data

You can think of structured data as organised data that you can store in a table.

Generative AI models are better at learning patterns in unstructured and continuous data.

Like images or text.

Structured data often involves complex relationships between different fields.

For example, in a database of customer transactions, there could be a relationship between:

  • Customer demographics
  • Purchase history
  • Product details

This kind of structured data requires high precision and accuracy.

Generative AI models are probabilistic in nature.

They can generate data that approximates the overall distribution.

But they may not capture specific, critical details accurately enough.

Structured data is also often sparse.

This means that many fields can be empty or missing values.

5. Bias Output & Toxicity

At the end of the day, these models are trained by humans.

And every human is subject to biases.

These biases can be projected onto the AI models because we get to determine what is “right” and “wrong”.

As a result, this can influence the training dataset we give the model.

If you ask ChatGPT, “how many genders are there?”, you’ll get something like this:

screenshot of a conversation with ChatGPT with the question "how many genders are there"

Some people may interpret this as a “biased” answer.

6. Potential for Misuse

Fake content like deepfakes is a big problem in the Generative AI space.

With enough facial data, anyone with a laptop can take your face and paste it onto an existing video.

The result is almost enough to fool anyone.

deepfake of Margot Robbie and Amber Heard
You can see how frighteningly real it’s starting to get

You can imagine how damaging it would be if people used this technology to:

  • Create fake videos of you promoting a product
  • Put your face on pornographic material

Further reading

In this section, I provide you with more resources on the topic if you are looking to go deeper.

Articles, Books & Papers

Video & Courses

Conclusion

In summary, Generative AI has the ability to learn from data and generate new data.

Data that is coherent and contextually relevant.

As we look to the future, the possibilities of Generative AI continue to expand.

Do you have any more questions?

Or did I perhaps miss something?

Let me know in the comments section below.

  1. https://www.sciencedirect.com/science/article/pii/S2667241323000198
  2. https://www.sciencedirect.com/science/article/pii/S1472811723000289
  3. https://www.researchgate.net/publication/230708329_Generative_Artificial_Intelligence
  4. https://www.coursera.org/learn/generative-ai-for-everyone/
  5. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction
  6. https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html#
  7. https://www.science.org/doi/abs/10.1126/science.adh4451
  8. https://www.researchgate.net/publication/364085232_Generative_Artificial_Intelligence_Trends_and_Prospects
  9. https://www.cloudskillsboost.google/course_templates/536
  10. http://introtodeeplearning.com/
  11. https://www.nature.com/articles/s41587-020-0418-2#citeas
  12. https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/
  13. https://arxiv.org/abs/2311.03595
  14. https://blog.khanacademy.org/harnessing-ai-so-that-all-students-benefit-a-nonprofit-approach-for-equal-access/
