
Samar Fatima, RMIT University and Kok-Leong Ong, RMIT University
Artificial intelligence (AI) is becoming ever more prevalent in our lives. It’s no longer confined to certain industries or research institutions; AI is now for everyone.
It’s hard to dodge the deluge of AI content being produced, and harder yet to make sense of the many terms being thrown around. But we can’t have conversations about AI without understanding the concepts behind it.
We’ve compiled a glossary of terms we think everyone should know, if they want to keep up.
Algorithm
An algorithm is a set of instructions given to a computer to solve a problem or to perform calculations that transform data into useful information.
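To make that concrete, here's a toy example of our own (not part of any standard definition): a few lines of Python that follow a fixed set of steps to turn raw data into a useful piece of information.

```python
# A toy algorithm: step-by-step instructions that turn raw data
# (daily temperature readings) into useful information (the average).
def average_temperature(readings):
    total = 0.0
    for value in readings:            # step 1: add up every reading
        total += value
    return total / len(readings)      # step 2: divide by the count

print(average_temperature([18.5, 21.0, 19.2, 22.3]))  # prints 20.25
```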
Alignment problem
The alignment problem refers to the discrepancy between our intended objectives for an AI system and the output it produces. A misaligned system can be highly capable, yet behave in ways that conflict with human values. We saw an example of this in 2015, when an image-recognition algorithm used by Google Photos was found auto-tagging pictures of black people as “gorillas”.
Artificial General Intelligence (AGI)
Artificial general intelligence refers to a hypothetical point in the future where AI is expected to match (or surpass) the cognitive capabilities of humans. Many AI experts expect this to happen, but they disagree on the specifics, such as when it will happen and whether it will result in fully autonomous AI systems.
Artificial Neural Network (ANN)
Artificial neural networks are computer algorithms used within a branch of AI called deep learning. They’re made up of layers of interconnected nodes, arranged in a way that loosely mimics the neural circuitry of the human brain.
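As a rough sketch of what a single “node” does (our own simplified illustration, with made-up numbers), the Python snippet below weights its inputs, sums them and squashes the result through an activation function:

```python
import math

# A single artificial "node": weighted inputs plus a bias, passed
# through a squashing (activation) function, loosely echoing how a
# biological neuron fires more or less strongly.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# The weights and inputs here are arbitrary, for illustration only.
print(neuron([0.5, 0.8], weights=[0.4, -0.6], bias=0.1))
```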
Big data
Big data refers to datasets that are much larger and more complex than traditional data. These datasets, which greatly exceed the storage capacity of household computers, have helped current AI models perform with high levels of accuracy.
Big data can be characterised by four Vs: “volume” refers to the overall amount of data, “velocity” refers to how quickly the data are generated, “veracity” refers to how accurate and trustworthy the data are, and “variety” refers to the different formats the data come in.
Chinese Room
The Chinese Room thought experiment was first proposed by American philosopher John Searle in 1980. It argues a computer program, no matter how seemingly intelligent in its design, will never be conscious and will remain unable to truly understand its behaviour as a human does.
This concept often comes up in conversations about AI tools such as ChatGPT, which seem to exhibit the traits of a self-aware entity – but are actually just presenting outputs based on predictions made by the underlying model.
Deep learning
Deep learning is a category within the machine-learning branch of AI. Deep-learning systems use neural networks with many layers (hence “deep”) and can process large amounts of complex data with greater accuracy than simpler machine-learning approaches.
These systems perform well on relatively complex tasks and can even exhibit human-like intelligent behaviour.
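To show what “deep” means in practice, here's a minimal, untrained sketch (again our own illustration, with arbitrary layer sizes): several layers of the nodes described above, each feeding its output into the next.

```python
import random

random.seed(0)

# A "deep" network is simply layers of nodes feeding into one another.
# Each layer transforms its input before passing it on; stacking many
# layers lets the network capture increasingly complex patterns.
def make_layer(n_inputs, n_outputs):
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_outputs)]

def forward(layer, inputs):
    # Each node sums its weighted inputs (bias omitted for brevity)
    # and applies a ReLU activation.
    return [max(0.0, sum(w * x for w, x in zip(weights, inputs)))
            for weights in layer]

layers = [make_layer(3, 4), make_layer(4, 4), make_layer(4, 2)]  # 3 layers deep
signal = [0.2, 0.7, 0.1]
for layer in layers:
    signal = forward(layer, signal)
print(signal)  # the network's (untrained, meaningless) output
```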