Introduction to Artificial Intelligence: Pitfalls of AI

Pitfalls of Current AI

Generative AI tools learn from training data created by people, so they can pick up the biases and stereotypes in that data. If some groups are under-represented in the training data, that will show in the AI's results, and marginalised groups tend to be hurt most by this bias.

You should therefore think critically about what these AI tools give you, just as you would with any other source. Get a balanced view by drawing on a range of resources, such as books and articles, rather than relying on the AI alone.

Generative AI can be useful, but it is also risky because its training data isn't perfect. Do thorough research across different sources to get the best information and avoid bias, and check the AI's outputs carefully instead of simply accepting them. This reduces the problems caused by bias while still letting you benefit from AI tools.

Tools like ChatGPT seem intelligent, but don't treat them as reliable sources. Their answers can't always be verified, so check their outputs against trusted websites or books.

You can't see the exact training data for these AI systems. Their responses might rely on weak research or made-up "facts" presented as truth.

The information AI generates is usually too broad and lacks the specific detail needed for university study.

As AI grows in popularity, false content spreads rapidly, and tools can now produce convincing fake photos, audio and video.

Sharpen your critical thinking to evaluate whether AI-generated information is accurate and reasonable, and use good judgement when deciding whether to trust and use its outputs.

As generative AI systems are trained on vast datasets scraped from the internet, it's important to be mindful of what information you provide them. Their models are built from sources ranging from news and academic papers to forums and websites. They are even trained on false data to recognise misinformation.

With these models gaining more unfiltered access to the web and incorporating their own outputs as training data, you must be cautious about what you input. Any personal details you provide, such as your name or institution, may become part of the dataset used to train the AI system. It's wise to avoid submitting identifiable information to generative models.

Additionally, consider the impact of the content you request the AI to produce, as it could potentially be added to its training data. While these tools offer tremendous benefits, it's vital to use them ethically by being selective in the prompts you submit. With greater awareness and care, students can utilise generative AI responsibly.
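As a practical illustration of being selective with prompts, here is a minimal sketch (in Python) of stripping obvious personal details out of a prompt before pasting it into a generative AI tool. The patterns, the example names and the redact helper are invented for this example; this is not a complete privacy filter and not part of any particular AI service.

    import re

    # Hypothetical patterns for obvious identifiers (illustration only; real
    # personal data takes many more forms than this).
    PERSONAL_PATTERNS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email removed]"),  # email addresses
        (re.compile(r"\b\d{7,11}\b"), "[number removed]"),            # long numbers (phone/ID)
    ]

    def redact(prompt: str, names: list[str]) -> str:
        """Remove names and obvious identifiers before submitting a prompt."""
        for name in names:
            prompt = prompt.replace(name, "[name removed]")
        for pattern, replacement in PERSONAL_PATTERNS:
            prompt = pattern.sub(replacement, prompt)
        return prompt

    draft = ("I'm Jane Doe (jane.doe@example.ac.uk, student ID 20241234) "
             "at Example University. Summarise my essay plan.")
    print(redact(draft, names=["Jane Doe", "Example University"]))
    # I'm [name removed] ([email removed], student ID [number removed])
    # at [name removed]. Summarise my essay plan.

Even a simple check like this is no substitute for judgement: the safest personal detail is the one you never type into the tool at all.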

Most AI systems have no concept of disability or accessibility (they don't understand the difference)

  • Steam does not record accessibility as a data point
  • Netflix's recommendation system doesn't take the availability of captions or audio description into account
  • Reddit recommends image posts to users who cannot see them

Marketing and analytics systems don’t account for disability

  • Those with unmet accessibility needs can be locked out of data collection entirely
  • Assistive tech can make data collection inaccurate (e.g. most tools analyse mouse movements, which many assistive-technology users never produce); see the sketch after this list
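To make that last point concrete, here is a hypothetical sketch (in Python) of the kind of "engagement" metric many analytics tools compute from mouse movements. A keyboard-only or switch user generates key and focus events but no mouse movements, so their session simply disappears from the figures. The event format and helper function are invented for illustration.

    # Hypothetical session logs: one mouse user, one keyboard-only user
    # (e.g. someone using a screen reader or switch device).
    sessions = {
        "mouse_user":    [{"type": "mousemove"}, {"type": "mousemove"}, {"type": "click"}],
        "keyboard_user": [{"type": "keydown"}, {"type": "focus"}, {"type": "keydown"}],
    }

    def engaged_sessions(sessions: dict) -> list[str]:
        """Naive engagement metric: a session only counts if it contains
        mouse-movement events."""
        return [
            user for user, events in sessions.items()
            if any(event["type"] == "mousemove" for event in events)
        ]

    print(engaged_sessions(sessions))  # ['mouse_user'] – the keyboard-only user is invisible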

Video-based systems fail people with disabilities

  • Eye movement is an inaccurate metric for many people
  • Use of micro-expressions disadvantages those with physical challenges or who are not neurotypical.
  • Inability to accurately transcribe speech disadvantages those with physical challenges

AI-generated image descriptions introduce bias:

  • Describes men and women differently
  • Misgenders many individuals – works with stereotypes
  • Fails to correctly describe ethnicities – can misidentify some people of colour as not even people
  • Over-identifies objects common in its training data – will describe things that are not in the image

Voice recognition

  • Makes twice as many errors understanding marginalised accents or speech patterns (see the worked example after this list)
  • Unable to understand stutters
  • Locks people with disabilities out of telephone support and voice assistants (e.g., booking doctor appointment)
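To show what "twice as many errors" means in practice, speech recognition is usually scored by word error rate (WER): substitutions plus deletions plus insertions, divided by the number of words actually spoken. The numbers below are purely illustrative, not measurements of any specific system.

    def word_error_rate(substitutions: int, deletions: int,
                        insertions: int, words_spoken: int) -> float:
        """WER = (S + D + I) / N, the standard speech-recognition error measure."""
        return (substitutions + deletions + insertions) / words_spoken

    # Illustrative figures only: the same 100-word utterance transcribed for two speakers.
    wer_majority     = word_error_rate(substitutions=6, deletions=2, insertions=2, words_spoken=100)
    wer_marginalised = word_error_rate(substitutions=12, deletions=5, insertions=3, words_spoken=100)

    print(f"Majority accent WER: {wer_majority:.0%}")          # 10%
    print(f"Marginalised accent WER: {wer_marginalised:.0%}")  # 20% – twice the error rate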

As powerful generative AI systems become more prevalent, fair access to these technologies remains a pressing concern, especially for university students. Many of these systems are locked behind paywalls and expensive subscriptions, which severely limits access for less privileged students who cannot afford the fees. Even at institutions with site-wide licences, usage allowances can constrain students' ability to fully utilise AI for their learning and research.

By working to expand access through cost-reduction programs and scholarships, while also prioritising diversity and representation, universities can help ensure all students, regardless of background, have equal opportunities to harness advanced AI. More dialogue and initiatives focusing on AI ethics are imperative as these technologies become further entrenched in academia.

Due to current issues with unfair access, some departments may have rules on which generative AI tools (GAITs) you can use, to combat any potential unfair advantage. It is always best practice to check with your department before using AI to help with an assessment.

"Shockingly Stupid" AI

AI's lack of 'Common Sense', April 2023