Generative AI and Product Development: What We’ve Learned & What’s Ahead
Tim Flem · November 30, 2023
In a recent Executive Order from the White House on AI, President Biden encouraged the industry to “shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.” We’ve known since research completed in the 1980s that personalized tutoring is significantly more effective than “factory models of education,” and I have spent a good portion of my career considering how human-centered product design and technology can help fulfill the promise of personalized learning.
The time is now. This fall, I’ve watched with great interest and excitement as our teams at Macmillan Learning have tested personalized tutoring with thousands of students in courses in our digital courseware product, Achieve.
Early results indicate that AI-enhanced personalized tutoring positively impacts student engagement and progress, especially when students need help with assignments. Without giving answers away, the AI-powered tutor uses Socratic-style questions delivered via a chatbot to guide students step by step to the correct conclusion. Importantly, we have seen that whereas students sometimes feel self-conscious asking their instructor or teaching assistants for help, they are open and persistent with the AI tutor, asking questions repeatedly until they gain a better understanding.
Faculty have also responded positively, noting that the AI tutor is available late at night and at other times when faculty and teaching assistants cannot answer questions. Because our AI tutor is grounded in the specific Achieve course content from the instructor’s assignment, faculty have reported confidence that their students are receiving better assistance than they would from other online options. While it’s still a bit early for us to understand the efficacy of the AI tutor, we have enough early positive indicators that we are eager to now understand its impact on learning. Please stay tuned!
AI tutoring is just the beginning of the opportunities in front of us. Imagine a learning environment in which teachers have a learning assistant that knows each student's preferences and level of preparedness, paces lessons accordingly, and provides timely interventions when needed. Imagine that instructors can intrinsically engage students by framing knowledge acquisition and skill-building in ways that acknowledge each student’s curiosity, lived experience, cultural background, and personal goals, all while ensuring that students make progress toward the course outcomes faculty have set. The promise of AI is that we can support both instructors and students in making learning more deeply personal, accessible, and engaging.
But how do we get there? It may be easy to surmise that, with the breathtakingly fast evolution of Generative AI technology, the promise of personalized learning assistants will be delivered very soon by large language models. But human-centered products never result purely from technological advancement; instead, we must roll up our sleeves and intentionally create products that students and teachers find valuable and trustworthy.
One of the most challenging and important problems to solve with AI-enhanced personalized learning is the management and protection of student data privacy and security. In surveys Macmillan Learning has conducted this fall, 63% of students indicated that they have concerns about how data is used, stored, and generated by AI applications and companies. In our fall 2023 AI tutor tests, we have been firmly grounded in the AI safety and ethics principles and processes that we developed with the help of two advisory boards of experts. Good intentions are important, but they’re not enough.
We have been actively monitoring, and will continue to monitor, our systems to ensure that data, privacy, and security measures are working as intended. We will continue to work with experts to stay current on quickly evolving tools and best practices and, importantly, to implement auditing processes for the AI products and features we’re developing. We are resolute that AI tutors and assistants in our Achieve and iClicker platforms will align with our rigorous human-centered AI ethical principles and processes.
We’ve also heard concerns that the use of AI may create disparities in education, as economically disadvantaged individuals might not have access to the same resources. There's also the risk of AI systems inheriting biases present in their training data, which can perpetuate those disparities. We believe that it's essential to approach the integration of AI in education with an awareness of these challenges and a commitment to use these tools ethically, inclusively, and equitably.
Again, good intentions are important, but not enough. We are working with AI bias experts to determine whether we can proactively detect and, when possible, mitigate bias in training data. This fall, we have conducted research specifically with students and faculty at minority-serving institutions to ensure that we acknowledge the needs, questions, and concerns around AI among traditionally underrepresented populations.
As if these substantial challenges were not enough, we also have new AI software infrastructure, QA testing, and monitoring projects to tackle. Every workday feels fuller and more fulfilling than the day before, and I’ve truly never been more energized in my career in education. I share President Biden’s view that we have rarely had such a tangible opportunity to fundamentally transform education, and to do so in a way that benefits every learner.