So you've heard about artificial intelligence (AI) in the news and have even watched it play out in movies in the context of apocalyptic killer robots overtaking the world. Frightening? Yes. Plausible? No - at least not yet anyway. With all the hype surrounding AI, how can we separate fact from fiction? We'll take a look at some of the key terms you need to know about AI, trace the history of its development, and explore how it's impacting education both now and in the future. Let's see how!
- Algorithms - sequenced, computer-coded instructions that outline the steps a computer must follow to complete a function or task-dependent procedure.
Algorithms are the heart of AI. You've likely heard your computer science colleagues using algorithms in their work - whether it's designing an animation, analyzing big data, inputting mathematical calculations, or processing an image. All of these tasks require instructions that the computer must follow in a predefined order. A simple analogy is following a recipe: a linear, step-by-step process that leads to predictable outcomes. As AI continues to evolve, it can be thought of as the continual refinement of increasingly efficient computer-directed code. AI's distinction from other computer programming is how it can be integrated with human processes, such as "visual perception, speech-recognition, decision-making, and learning" (Holmes et al., 2019, p. 17). Think of facial recognition, self-driving cars, personal assistants like Siri and Alexa, or language learning apps like Duolingo. All of these technologies employ AI via algorithms to complete tasks, deliver immediate results to our queries, and drive us toward a more technologically literate society.
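The recipe analogy can be made concrete with a minimal, hypothetical sketch in Python (the function name and grade data are purely illustrative):

```python
def average_grade(scores):
    """A simple algorithm: fixed steps executed in order, predictable output."""
    total = 0                      # Step 1: start a running total
    for score in scores:           # Step 2: add each score, one at a time
        total += score
    return total / len(scores)     # Step 3: divide by the number of scores

print(average_grade([80, 90, 100]))  # prints 90.0
```

Like a recipe, the same inputs always produce the same output; AI systems are built on top of many such deterministic procedures.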
- Machine Learning - self-automated computer programming that involves a computer predicting novel outcomes based upon pre-existing input data (Holmes et al., 2019).
Machine learning extends beyond basic algorithms by involving a three-step process:
- Analyze data
- Construct a conceptual model
- Take action
Machine learning allows a computer to analyze big data and detect patterns that reveal nuances or relationships. With this information, the computer develops a logical framework that 'makes sense' of the data. This model is the rationale for why and how the computer undertakes a specific action: it predicts future outcomes based upon the current state of the data. For example, future stock market trends can be anticipated as machine learning gauges patterns that point to a predictable outcome. Even Facebook has adopted machine learning to automatically identify photographs of people, using facial recognition trained on patterns of named people. As the computer takes action and adjusts to new patterns and information that recalibrate its processes, it continually refines itself, which is how it 'learns.' The exponential growth of AI is attributed in large part to the improved efficiency of machine learning.
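The three steps can be sketched with a toy example: a minimal linear "model" in plain Python, using invented data (the numbers and names below are illustrative, not drawn from any real system):

```python
def fit_line(xs, ys):
    """Steps 1-2: analyze the data and construct a simple model y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy historical observations (e.g., past measurements over time)
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
a, b = fit_line(xs, ys)
prediction = a * 5 + b   # Step 3: take action - predict the next value
```

The "model" here is just a slope and an intercept, but the pattern is the same one real systems follow at far greater scale: analyze, model, then act on new inputs.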
- Supervised Learning - the programmed data has already been classified or labeled; the computer generates results based upon pre-defined input data.
A simple example is Facebook automatically labeling new images of people, using correctly labeled profile pictures as a guide. The important distinction between supervised and unsupervised learning is that supervised learning has a correct value or answer that links the data with the labels. Supervised learning is predictable, whereas unsupervised learning involves more freedom to discover hidden patterns or relationships that humans may not have detected originally (Petersson, 2021).
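The idea of learning from labeled examples can be illustrated with a minimal nearest-neighbor sketch (the coordinates and names are invented; real facial recognition uses far richer features):

```python
def nearest_neighbor(labeled, point):
    """Classify a new point with the label of its closest labeled example."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    features, label = min(labeled, key=lambda item: dist(item[0], point))
    return label

# Labeled training data: (features, label) pairs - the 'correct answers'
faces = [((0.1, 0.2), "Alice"), ((0.9, 0.8), "Bob")]
print(nearest_neighbor(faces, (0.15, 0.25)))  # prints Alice
```

The labels supplied up front are what make this "supervised": the computer is told the right answer for every training example.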
- Unsupervised Learning - the computer analyzes unclassified, big data to "uncover hidden patterns in the underlying structure of the data, clusters of data that can be used to classify new data" (Holmes et al., 2019, p. 19).
Unlike supervised learning, unsupervised learning does not have a 'correct' value linked to how it identifies new data. Instead, it explores hidden nuances, groups data according to relationship patterns, and draws conclusions based upon the conceptual models it constructs. Holmes et al. (2019) explain how marketing agencies take advantage of unsupervised learning to target online shoppers based upon their purchasing habits or product preferences. Another example is distinguishing fraudulent financial transactions from legitimate ones. In both examples, the computer becomes attuned to patterns, or perceived normalcies, that predict future outcomes.
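Clustering is the classic unsupervised technique. Below is a minimal, hypothetical k-means sketch on one-dimensional data (the spending amounts are invented for illustration); note that no labels are supplied anywhere - the groups emerge from the data itself:

```python
def k_means(points, k, iters=10):
    """Group unlabeled points into k clusters - no 'correct' labels involved."""
    centers = points[:k]                      # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assign each point to its nearest center
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # recompute centers
                   for i, c in enumerate(clusters)]
    return centers

# Unlabeled customer spending amounts: two natural groups emerge
spend = [5, 6, 7, 100, 110, 105]
print(sorted(k_means(spend, 2)))  # prints [6.0, 105.0]
```

A marketer could read the two resulting centers as "small spenders" and "big spenders" without ever having told the computer those categories existed.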
- Reinforcement Learning - a computer learns and evolves based upon immediate feedback (reinforcement), resulting in a continual, iterative refinement process.
Reinforcement learning is highly responsive to feedback. It adjusts and integrates feedback in real time, expediting processes and eliminating the need to rerun the same algorithmic process to accommodate updated data values. Instead of replicating the same process, the computer learns on its own by responding to immediate positive or negative feedback that defines how the process will be repeated. A simple example is how self-driving cars respond to stimuli, such as collisions or malfunctions. As the computer integrates new data values and recalibrates its operational model, it becomes more intelligent and better able to respond appropriately in future situations.
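The feedback loop at the heart of reinforcement learning can be reduced to a single update rule: nudge the current estimate toward the reward just observed. This is a deliberately simplified sketch (the "self-driving car" framing and the reward values are invented for illustration):

```python
def update_value(value, reward, lr=0.5):
    """Move the agent's estimate a fraction (lr) toward the observed feedback."""
    return value + lr * (reward - value)

# A toy agent's confidence in a maneuver, refined by successive feedback
confidence = 0.0
for reward in [1, 1, -1, 1]:   # +1 = success, -1 = collision (negative feedback)
    confidence = update_value(confidence, reward)

print(confidence)  # prints 0.4375
```

Each pass through the loop is one round of reinforcement: good outcomes pull the estimate up, bad outcomes pull it down, and the agent never reruns the whole history - it just keeps refining.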
- Artificial Neural Networks - computing networks comprising input and output units that mirror the structure and function of the human brain's neural networks.
Artificial neural networks include interconnected layers that communicate with each other; these layers consist of the input layer, hidden intermediary layers, and the output layer. Similar to how a human brain operates, the input layer extracts stimuli from the environment and filters them through the intermediary layers, which undertake a computational action, before a result is produced via the output layer. While sophisticated, artificial neural networks are still considered relatively primitive compared to the higher-order human brain. Their main limitation is that they can make decisions "for which the rationale is hidden and unknowable, or un-inspectable, and possibly unjust, a critical issue that is the subject of much research" (Holmes et al., 2019, pp. 21-22). This opacity has garnered much of the criticism and concern surrounding AI's emergence. Further research is imperative to gain a better grasp of how artificial neural networks can operate consistently and ethically.
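The input-to-hidden-to-output flow can be sketched in a few lines. This is a bare-bones forward pass with hand-picked weights (all values are invented; real networks learn their weights from data and have far more units):

```python
import math

def layer(inputs, weights):
    """One layer: each row of weights produces one unit's sigmoid-activated output."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

stimulus = [0.5, 0.9]                               # input layer: raw stimuli
hidden = layer(stimulus, [[0.4, 0.6], [0.7, 0.1]])  # hidden intermediary layer
output = layer(hidden, [[1.0, -1.0]])               # output layer: the result
```

Even in this tiny network the "rationale" is already hard to read off the weights, which hints at why interpretability becomes such a serious issue at the scale of millions of parameters.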
AI's origins trace back to the 20th century, when Sidney Pressey created the first teaching machine in the 1920s during his tenure as a professor at Ohio State University. While considered primitive today, Pressey's teaching machine was the first to experiment with self-scoring automation that provided instant feedback. Pressey believed his mechanical typewriter would decrease the teacher's workload by consolidating student data and evaluating it quickly and efficiently. This was the beginning of what would later evolve into the modern Scantron.
B.F. Skinner, the father of behaviorism, expanded Pressey's model to develop his own teaching machine in 1958. Like Pressey's, his machine provided immediate reinforcement. However, his version required students to manually compose their answers instead of selecting an answer from a list of possible choices. He believed that students learn best by recalling an answer rather than simply recognizing a correct answer from a limited selection of possible answer choices. In this way, his teaching machine became a teaching tool that replicated a personal tutor. He pioneered what would later become AI intelligent tutoring systems.
Adaptive learning began to emerge in the early 1950s, when Gordon Pask developed SAKI (self-adaptive keyboard instructor). This machine was designed for trainee keyboard operators learning to operate a device that punched holes in cards for data processing (Holmes et al., 2019). What distinguished this machine was how it adapted the speed at which numbers were presented to a trainee. If the individual struggled to input certain numbers, those numbers would reappear at a slower speed. Likewise, if the individual excelled at inputting other numbers, those would reappear at a higher speed and with increased frequency. The "task presented to a learner was adapted to the learner's individual performance, which was represented in a continuously changing probabilistic student model" (Holmes et al., 2019, pp. 26-27).
During the 1960s and 1970s, SAKI's ideas evolved into Computer-Aided Instruction (CAI), which became popular during this period. One such CAI system was PLATO (programmed logic for automatic teaching operations), developed at the University of Illinois. It was innovative in allowing students to access learning content on a mainframe computer via remote terminals. It also introduced educational tools that are common today, such as "user forums, email, instant messaging, remote screen-sharing, and multiplayer games" (Holmes et al., 2019, pp. 27-28).
Other CAI systems were developed by Stanford University and IBM for local elementary schools, while systems like Brigham Young University's were designed to teach introductory college courses. While CAI systems were popular during the 1960s and 1970s, few were widely adopted due to logistics such as the cost and accessibility of the required software. During the 1980s, personal computers boomed and CAI systems became more popular, yet they still lacked adaptivity. This was a chronic problem until AI entered CAI with the introduction of Jaime Carbonell's SCHOLAR. This system differed from earlier CAI models in the interactive dialogue it provided students based upon their responses. It was first explored for teaching geography, but the approach has since expanded into multiple subject areas and now goes by a new name: Intelligent Tutoring Systems (ITS).
ITS have become increasingly modernized compared to earlier CAI-based systems. ITS incorporate three models - domain, pedagogical, and learner - that collectively provide instant feedback and facilitate unique learning pathways appropriate for the student's level. CAI systems provided only domain and pedagogical models, meaning that user adaptability was restricted.
AI-driven ITS include a learner or student model that is "a representation of the hypothesized knowledge of the student" (Holmes et al., 2019, p. 33). This means it creates learner profiles that capture the student's interaction with the machine, their perceived difficulty with the content, their misconceptions, and their emotional state while using the machine. With the trifecta of domain, pedagogical, and learner models, ITS have transformed education, operating across multiple subject areas and disciplines. It's only a matter of time before ITS once again redefine themselves into even more powerful learning facilitators.
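How the three models interact can be sketched very roughly in code. Everything below is a hypothetical toy - the topic names, mastery scores, and selection rule are invented to show the division of labor, not how any real ITS works:

```python
# Domain model: what can be taught, ordered by difficulty
domain = {"fractions": 1, "ratios": 2, "percentages": 3}

# Learner model: hypothesized knowledge of this student (mastery from 0 to 1)
learner = {"fractions": 0.9, "ratios": 0.4, "percentages": 0.0}

def next_topic(domain, learner, threshold=0.8):
    """Pedagogical model: pick the easiest topic the student hasn't mastered."""
    unmastered = [t for t in domain if learner.get(t, 0) < threshold]
    return min(unmastered, key=lambda t: domain[t]) if unmastered else None

print(next_topic(domain, learner))  # prints ratios
```

The point of the sketch is the separation of concerns: the domain model knows the content, the learner model knows the student, and the pedagogical model decides what to do next by consulting both.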
It has been argued that the United States is falling grossly short of leveraging AI to its full potential. Some claim that AI will continue to become more embedded in society, requiring individuals to reskill or upskill to avoid losing jobs that can now be automated. An estimated 70% of jobs in the office administration, production, and transportation industries could be automated (Muro et al., 2019). Social inequalities may be exacerbated as AI alters human livelihoods and favors the more skilled, adept workers who can leverage it effectively in their careers. Those who cannot adapt will be left behind. The same can be said for young learners in schools who may not adjust well to AI. Young learners require intensive one-on-one instruction and social-emotional support that an AI system cannot replicate. Introducing AI as a substitute may be plausible in the near future, but the challenges it will bring are extensive.
The area where I see AI being most impactful is its ability to create personalized learning pathways. Students deserve personalized instruction that meets their needs in real time, and as educators, it is our responsibility to advocate for our students and equip them with the technical skills necessary to thrive in a technologically literate society. AI will become more ingrained and palpable in our daily lives, and students must be able to adapt with the times and master the new technologies they will eventually encounter in their careers and personal lives.
AI's influence in learning and development contexts remains to be seen in many respects. It is expected to improve a teacher's work-life balance by automating processes like grading and progress monitoring, leaving more time for teachers to do what they do best: teach. However, AI may also disrupt how teachers teach, potentially rendering current teaching practices and pedagogy obsolete. The entire process of training student teachers, issuing teaching degrees, and delivering professional development will shift. Classroom design and pedagogical approaches will be up for reimagining in the age of AI. "For many the use of AI in education remains a mystery" (Holmes et al., 2019, p. 10).
Higher education needs to be aware of AI's disruptive influence. At Carisbrooke University, we need to be cognizant and proactive rather than reactive. Here is a brief three-step action plan to get us started.
1) Professional Development Sessions - consult with the university instructional design team and IT department to design faculty and staff meetings that educate university stakeholders on how AI is being implemented in higher education. Introduce guest speakers from peer institutions and companies that are using AI to share insights and practical tips on integrating it effectively. Actively seek out pilot studies or formal programs introducing AI in higher education, and use these models as case studies to introduce faculty and staff to AI and its multifaceted uses.
2) Reskill & Upskill Workers - foster the uniquely human qualities that machines cannot automate. Encourage faculty and staff to take advantage of micro-credentials or professional development courses designed to enhance their current skill sets so they are not displaced by AI technologies. Promote a lifelong-learning mentality that encourages faculty and staff to adapt to new technology. Communicate the importance of human qualities, like empathy, emotion, personal connections, and relationship development, that may be immune to AI's ability to automate tasks. Convey that partnering with AI is not to be feared, but that preparation and proactiveness are critical to remaining cutting-edge and relevant in today's digital world.
3) Expand AI in the Traditional Teaching Curriculum - include courses in the Education Department that instruct student teachers on how to integrate educational technology tools, such as ITS, to personalize student learning. Update the current curriculum to reflect how AI impacts pedagogical approaches, and explore how these tools can facilitate current teaching practices. Explore different technology tools, apps, and programs that employ AI. Have student teachers introduce these new technologies in their classrooms to gauge how ITS impact student learning. Continue the dialogue in focus group interviews and classroom discussions with educational technology leaders to understand how AI will continue to transform education, both now and in the future.
As you can see, AI is changing the way we think and operate. It's already becoming an embedded norm, such that we often no longer perceive AI as being AI. With this in mind, it's important to remain cognizant of emerging technologies. Hopefully, this presentation has provided you with a brief overview of AI's role in education and its implications for the future. If nothing else, I hope it has reassured you that AI is not restricted to the apocalyptic killer robots that media and movies continue to portray. It's so much more than that. Rest assured that AI has the potential for good - that is, if it's leveraged appropriately. And if it's not... well, let's end on a good note and tackle that dilemma another day.
References
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
Muro, M., Maxim, R., & Whiton, J. (2019, January 24). Automation and artificial intelligence: How machines are affecting people and places. Brookings. https://www.brookings.edu/research/automation-and-artificial-intelligence-how-machines-affect-people-and-places/
Petersson, D. (2021, March). Supervised learning. TechTarget. https://searchenterpriseai.techtarget.com/definition/supervised-learning
Image Citations
Aneddotica Magazine. (2013, July 11). Pressey's teaching machine [Image]. https://aneddoticamagazine.com/wp-content/uploads/2013/07/pressey.jpg
Sereikaite, I. (2019, July). Figure 1: Basic architecture of an ITS [Image]. https://d3i71xaburhd42.cloudfront.net/403c0f91fba1399e9b7a15c5fbea60ce5f28eabb/2-Figure1-1.png
University of Illinois. (1970). PLATO III student terminal [Image]. https://archon.library.illinois.edu/index.php?p=digitallibrary/getfile&id=9655
University of Illinois. (1972). PLATO IV student terminal [Image]. https://archives.library.illinois.edu/erec/University%20Archives/1505050/BrownBag/PlatoTerm.jpg
Vargas, J. S. (2010, September). B.F. Skinner's teaching machine [Image]. https://dl.acm.org/cms/attachment/79f30797-b669-486b-bd70-e7698ed5feb4/100921_vargas_image1.jpg
Watters, A. (2015, March 28). SAKI adaptive teaching machine [Image]. https://s3.amazonaws.com/hackedu/2018-08-08-saki.jpg
Credits:
Created with images by viarami - "mockup typewriter word" • geralt - "definition books library" • jarmoluk - "old books book old" • Pexels - "laptop notebook cellphone" • geralt - "business idea planning business plan" • jmexclusives - "business meeting meeting business" • TheDigitalArtist - "connection hand human" • geralt - "finger hand contact" • geralt - "problem solution help"