On Constructivism in AI — Past, Present and Future

Aleksandra Hadzic
6 min read · Nov 20, 2021


Constructivism in Artificial Intelligence

Constructivism is a knowledge and learning theory that can be applied to artificial intelligence. It argues that learning, knowledge, and understanding are constructive processes that build on prior knowledge.

Rather than forming a single, fixed conception of the world, for example, we layer new pieces of information on top of our existing knowledge.

In AI, constructivism holds that learning and knowledge arise from constructing internal models of the world that are continually adjusted to fit new experiences.

Constructivism in AI affirms that machine intelligence is best realized by endowing systems with infant-like instinctive reflexes and then letting them gradually learn how to interact with their surroundings.
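To make this concrete, here is a minimal sketch in Python of what such a system might look like. Everything in it (the class name, the update rule, the 0.8/0.2 blend) is my own illustration rather than an established algorithm: the agent starts with hard-coded reflexes and gradually overrides them with an internal model constructed from its own experiences.

```python
import random
from collections import defaultdict

class ConstructivistAgent:
    def __init__(self, reflexes):
        # Innate reflexes: a fixed mapping from stimulus to default action.
        self.reflexes = reflexes
        # Internal model: learned expectations about what each action
        # leads to in each situation, refined from experience.
        self.model = defaultdict(dict)   # model[stimulus][action] = expected outcome

    def act(self, stimulus):
        # Prefer what the learned model suggests; fall back on reflexes.
        learned = self.model.get(stimulus, {})
        if learned:
            # Pick the action whose remembered outcome was best.
            return max(learned, key=lambda a: learned[a])
        return self.reflexes.get(stimulus, random.choice(list(self.reflexes.values())))

    def observe(self, stimulus, action, outcome):
        # Constructive step: layer the new experience onto prior knowledge
        # instead of replacing the model wholesale.
        previous = self.model[stimulus].get(action, outcome)
        self.model[stimulus][action] = 0.8 * previous + 0.2 * outcome

agent = ConstructivistAgent(reflexes={"loud noise": "startle", "bright light": "blink"})
agent.observe("bright light", "look away", outcome=1.0)   # experience reshapes behaviour
print(agent.act("bright light"))                          # "look away", no longer the reflex
```

The specifics of the update rule do not matter here; the point is that behaviour begins with reflexes and is progressively taken over by a model built from interaction, which is the constructivist claim in miniature.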

In computational terms, a constructivist epistemology suggests that processing mechanisms actively construct the world rather than simply mirroring it.

The world is much more than a collection of stimuli. The mind interprets new input and transforms it into meaningful and valuable representations by drawing on knowledge and information about the world.

AI that Learns From Interaction

Constructivist AI is an artificial intelligence approach that emphasizes learning and problem-solving abilities through interaction with the environment.

Both terms are used broadly: “learning” means any acquisition of knowledge, and “problem-solving” means any task that can be broken down into sub-goals to be completed in turn.

It differs from non-constructivist approaches to AI, which focus on programming or encoding solutions to specific problems; the primary advantage of the constructivist approach is the ability to acquire new skills by modifying existing ones.

Constructivist AI systems can learn new skills by interacting with their surroundings, but they are still far from solving all problems that humans can. The main limitation of constructivist AI systems is their inability to deal with abstract concepts, which are only meaningful in the context of other concepts.
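As a rough illustration of “new skills by modifying existing ones”, the toy Python below treats a skill as an ordered list of sub-goals and derives a new skill by adapting the closest existing one. The Skill class and adapt_skill helper are hypothetical names of my own, not an existing library.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    subgoals: list = field(default_factory=list)

def adapt_skill(known_skills, new_task_subgoals, new_name):
    """Derive a new skill by modifying the most similar existing skill."""
    def overlap(skill):
        return len(set(skill.subgoals) & set(new_task_subgoals))
    base = max(known_skills, key=overlap)                      # closest existing skill
    reused = [g for g in base.subgoals if g in new_task_subgoals]
    missing = [g for g in new_task_subgoals if g not in reused]
    # The new skill reuses what already works and appends only what is new.
    return Skill(new_name, reused + missing)

grasp = Skill("grasp-object", ["locate", "reach", "close-hand"])
stack = adapt_skill([grasp], ["locate", "reach", "close-hand", "lift", "place"], "stack-object")
print(stack.subgoals)  # ['locate', 'reach', 'close-hand', 'lift', 'place']
```

Nothing here is learned from scratch: the new skill inherits the structure of an old one, which is exactly the kind of incremental construction the paragraph above describes.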

To put it briefly, constructivism in AI holds that intelligence is a fundamental property of mental or neural structures. It is essentially the idea that what intelligence is (and thus what it looks like) is dictated by the mind itself rather than by things, events or stimuli in the environment. Furthermore, while constructivists agree that environments do influence learning, they place greater emphasis on internal learning mechanisms. Internalization is another term for this concept.

Knowledge as a Process for Constructing Data

Constructivism is a philosophical position holding that knowledge of the world has no absolute foundation but depends on the conceptual framework we use to generate and interpret it. The theory supports building artificial intelligence systems on experience rather than on pre-encoded knowledge, which makes it a good fit for designing real-world intelligent systems.

This method can be used to create learning systems based on an extended form of constructivist epistemology. This strategy entails:

(1) defining a reasonable path of experience,

(2) developing learning algorithms that can benefit from this experience.

Although these steps appear similar to those involved in traditional AI techniques such as case-based reasoning, there is a significant difference: in those techniques, human experts program the conceptual framework used by the system, whereas in constructivist AI it originates exclusively from experience.
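A minimal sketch of the two steps, under my own simplifying assumptions (a one-dimensional stream of observations and a crude online categorizer), might look like this. The point it tries to show is that the categories, the system’s conceptual framework, emerge from the stream of experience rather than being programmed in.

```python
def path_of_experience():
    # Step 1: a "path of experience" is just an ordered stream of observations,
    # arranged here from simple to more varied.
    yield from [0.1, 0.15, 0.12, 0.9, 0.95, 0.5, 0.55, 0.11, 0.93]

def learn_from_experience(stream, tolerance=0.2):
    # Step 2: an online learner that invents a new category whenever no
    # existing one explains the observation; the framework is constructed,
    # not given in advance by a human expert.
    categories = []          # each category is a running mean of its members
    counts = []
    for x in stream:
        if categories:
            i = min(range(len(categories)), key=lambda j: abs(categories[j] - x))
            if abs(categories[i] - x) <= tolerance:
                counts[i] += 1
                categories[i] += (x - categories[i]) / counts[i]   # refine prior knowledge
                continue
        categories.append(x)  # nothing fits: construct a new concept
        counts.append(1)
    return categories

print(learn_from_experience(path_of_experience()))
# e.g. three emergent categories near 0.12, 0.93 and 0.52
```

The clustering rule itself is deliberately naive; what matters is that the conceptual framework is a product of the experience stream, not an input to it.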

Advancements in Constructivist AI: Challenges and Opportunities

Constructivist AI is not purely philosophical but is also concerned with the practicalities of AI.

What distinguishes constructivist AI from other forms of AI? One way to answer this question is to investigate the relationship between formal symbolic reasoning and human learning. The two, logismós (reasoning) and máthēsis (learning), have been debated since the time of the ancient Greeks, who differentiated between them.

Plato held that only logismós could grasp abstract concepts such as justice or beauty, whereas formal mathematical reasoning was limited to concrete objects such as lines and circles. Aristotle disagreed, claiming that reason could grasp both tangible things and abstract ideas. The debate raged into modern times, with many philosophers taking an intermediate position in which some ideas can only be grasped through formal reasoning and others only through empirical observation.

Understanding Constructivism in AI

In AI, the constructivist approach has emerged as a practical way of developing advanced knowledge-based agents that learn from scratch rather than from prior knowledge.

The constructivist approach is desirable when developing autonomous intelligent systems that can operate in unfamiliar environments without prior knowledge because it provides simple procedures for learning from interaction with the environment.

Some believe the main challenge of Constructivism in AI is how to represent the knowledge being learned. In other words, there aren’t any overarching principles for representing knowledge; instead, each field must be investigated in detail.

The purpose of this article is to discuss some possibilities of constructivist representations that are known in various domains (such as physics, biology, and robotics) and may apply to other areas where inductive learning is used (e.g., medicine). The main point is that researchers in all of these domains have demonstrated that certain basic constructs (sometimes referred to as “invariants”) repeatedly appear in different problems and domains.

Constructivist representations containing these fundamental constructs can be built up into more complex representations to solve problems in new domains autonomously.
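As a speculative illustration of how such invariants might be reused, the Python sketch below writes a few basic constructs as predicates and composes them into a richer representation. The particular invariants and the compose helper are mine, chosen only to show the pattern, not drawn from any specific study.

```python
# Fundamental constructs ("invariants") that recur across domains.
def persists_over_time(track):             # object permanence
    return all(frame is not None for frame in track)

def conserved_quantity(values, tol=1e-6):  # conservation laws (physics)
    return max(values) - min(values) <= tol

def monotone_growth(values):               # growth processes (biology, economics, ...)
    return all(a <= b for a, b in zip(values, values[1:]))

def compose(*checks):
    """Build a more complex representation as a conjunction of invariants."""
    return lambda data: all(check(data[name]) for name, check in checks)

# A hypothetical "rigid body" representation assembled from two invariants.
rigid_body = compose(("positions", persists_over_time),
                     ("mass", conserved_quantity))

observation = {"positions": [(0, 0), (1, 0), (2, 0)], "mass": [1.0, 1.0, 1.0]}
print(rigid_body(observation))  # True: both invariants hold for this trace
```

Because each invariant is written once and reused, a new domain only needs the composition step, which is the autonomy claim made above in miniature.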

The Key to Understanding Consciousness

Constructivist AI is an attempt to apply the most important constructivist ideas in psychology and cognitive science to the development of AI.

It is a methodology for creating a model or a system capable of understanding and acting in the world, and it is based on the creation and application of models of that world.

The model construction process, which was once the foundation of all human problem-solving behaviour, is now mostly done unconsciously. The goal of constructivist AI is to make this process explicit and use it as the foundation for machine learning and skill acquisition.

The central concept of constructivist AI is holistic: the constructed model is greater than the sum of its parts.

The things we learn, or believe we learn, about the world are more than just a collection of facts; they are models with two dimensions, structure and content. These models represent how we see the world, understand its fundamental structure, reason about what we can do in it, classify things into kinds and types, attribute properties to things, plan actions, and so on.

In contrast to traditional approaches to AI, which are based on imitating human behaviour, constructivism assumes that ‘intelligence’ is the ability to build mental models.

The future role of AI researchers and engineers is to provide an intelligent system with data and tools for constructing its own models rather than providing the models themselves. This requires intuition, creativity and lifelong learning, so that we can identify risks, mitigate them and propose sound solutions for thriving.
