This week, Charles Fadel joins us for a discussion on what human learners and curriculum designers have to do to stay on pace with machines.
The Global Search for Education (GSE) is a regular contributor to the Edmodo Blog. Authored by C.M. Rubin, GSE brings together distinguished thought leaders in education and innovation from around the world to explore the key learning issues faced by today’s nations. Look for a new post every week and join the Global Search for Education Community on Edmodo to share your perspectives with their editorial staff.
Computers learning from big data have created a new source of knowledge for society.
Computers can already tell us many things, including who to date and what stocks to buy. And given the speed at which they are being developed, how long will it be before computers can write our high school essays or speak any foreign language, or even drive us to school?
Charles Fadel is the founder of the Center for Curriculum Redesign and author of Four-Dimensional Education: The Competencies Learners Need to Succeed. He just taught the first-ever class at Harvard Graduate School of Education on “Machine Learning + Human Learning”.
He joins us in The Global Search for Education to discuss how learners will keep up with the machines.
Charles, one third of all marriages start online. Automated trading programs have replaced 60–70% of human trading. An algorithm may soon be able to call 911 and possibly save our lives. What does this mean for education?
First of all, let’s establish some boundaries around what Machine Learning/AI can and cannot do, at least at this stage. We have seen tremendous advances in games such as chess, Go and poker; in fields such as speech and handwriting recognition and synthesis; in music composition; and so on. However, as amazing as these advances are, they represent so-called “bounded problems,” where the rules are clear and limited, even if the solution space is wide.
So AI is already superior at repetitive and predictive tasks: tasks that hinge on computational power, classifying huge amounts of data and inputs, and making decisions based on concrete rules. It does so with fewer biases than humans, though biases are still introduced by the algorithms and data sets that humans choose.
Humans, by contrast, are better at experiencing authentic emotions and building relationships; formulating questions and explanations across scales and sources; deciding strategically how to use limited resources across dimensions (including which tasks machines should be doing and what data to give them); making products and results usable for humans and communicating about them; and making decisions according to abstract values.
As to what this means for education, it implies that we should be changing the goals of education to focus on deeper learning: Relevance of what is taught, to build motivation, and personalization of the What and the How; Versatility, to create “Renaissance humans,” which brings robustness to face whatever life throws at us; and Transfer, ensuring that what we learn in the narrow confines of school translates into actionability in real-life situations.
Amusingly, AI’s successes of late use “deep learning” algorithms, and in education circles we have been talking about “deeper learning,” so it is really “deep + deeper learning” together!
We may soon owe our jobs to algorithms. Companies are using them to select job applicants. How long is it until an algorithm determines who’s accepted at Harvard? What does this all mean for education?
Companies and universities have long used quantitative and qualitative criteria to screen candidates: SAT scores, GPAs, pay scales, the brand of former employers, and so on, all of which are used to decide on a candidate’s fit. A lot is still left to human decisions, and we generally assume that humans are infallible. The reality is that these processes are fraught with imperfections: judges hand down harsher sentences in late-morning sessions, teachers grade the first few and the last essays in a stack more harshly, doctors are unable to keep up with all the advances in their field, and so on. I would welcome a day when algorithms handle the tedium and humans, unburdened, make the wise choices. Of course, this implies that we would be wise enough not to let the algorithms dictate decisions, as we have seen in policing cases throughout the country where algorithms have encoded existing zip-code biases…
Nevertheless, employers and universities alike are looking for well-rounded, globally literate applicants who not only master modern Knowledge, such as engineering and entrepreneurship, social sciences and information literacy, but also demonstrate Skills, as creative, critical thinkers who communicate and collaborate; display Character qualities, such as mindfulness, curiosity, courage, resilience, ethics and leadership; and adapt and learn how to learn via Meta-Learning abilities, with a growth mindset and metacognition.
The speed at which machine learning is improving is daunting. Does Machine Learning even belong in the lab? Shouldn’t everyone in society understand how these machines determine their facts? What does this mean for education?
Ray Dalio, Bridgewater hedge fund billionaire, stated in the Financial Times:
“We’re headed for a world where you’re either going to be able to write algorithms and speak that language, or be replaced by algorithms…” and I agree. We are going to see the emergence of a “priesthood class” of those who are capable of designing and using algorithms versus those who will live with their consequences, so it behooves education systems to ensure that *everyone* is numerate enough and algorithmically capable. All fields are becoming quantitative, with the possible exception of Philosophy; Biology in Darwin’s time was descriptive, and now it is mostly analytical. Further, this implies the renewed importance of Ethics as a course of study for everyone.
The knowledge accumulated by algorithms on any given task or domain will soon dwarf the knowledge scientists have accumulated over centuries. Are machine learning algorithms better than human scientists, given that they can look at much more data and analyze it faster than any human scientist ever could?
As described earlier, algorithms are better than humans when the data sets are clear and clean and the application is narrow. They undoubtedly accelerate our progress, for instance by speeding up genomics. However, to state that they are broadly better than human scientists is an unjustifiable stretch at this stage; it is like saying that because computers are faster than humans at calculating, they are better mathematicians. We work in symbiosis, and increasingly so, as Augmented Humans.
As to how these capabilities might evolve in the future, it is of course impossible to tell, but with billions of dollars being invested around the world, AI is becoming embedded in any and all applications, as is computing at large. It is becoming as ubiquitous and invisible as the microprocessors in a car’s braking system, for instance.
Next up is the ultimate algorithm — one that is capable of learning anything from data — who’s ready for that?
No one is, and if and when this happens, it will be the ultimate challenge for humanity. However, according to the best experts in the field, we are far, far off. We have seen tremendous bottom-up progress, but we are still missing several breakthroughs on the way to Artificial General Intelligence (AGI).
Of course, as discussed above, we do not need AGI to witness major disruptions! We will be experiencing “death by a thousand cuts” even with low-level capabilities: a bar code reader or an RFID tag, with zero intelligence, has already automated jobs away.
This brings us back to the strategy of nurturing Versatility, like a Swiss army knife: it may not be the best tool for any single job, but it is a wide base to draw from as the need arises, and can be sharpened as needed during one’s life.
That said, humanity is facing a multitude of problems such as global warming, financial instability, dictators and populists, inequities, etc., which makes my computer-scientist cousin quip that “we should be a lot more concerned about natural stupidity than about artificial intelligence.” There is something to ponder here indeed.