Despite billions invested, 80% of AI solutions fail to deliver promised value or achieve long-term adoption, and 95% of generative AI pilot projects fail to generate measurable value. This failure rate signals a fundamental design problem, not a technical one. We are creating systems that are “technologically brilliant… but humanly irrelevant”.
This course moves beyond surface-level design and tackles the systemic flaw: the industry’s reliance on forecasting. Forecasting uses the past and present to predict the future, excelling at optimizing what is probable but failing to perceive what is preferable. The result is systems that are “statistically correct, but humanly wrong”, automating the user’s existing dysfunction.
Participants will learn how to integrate foresight through the methodology of backcasting. Backcasting turns possibilities into plans by starting from a desirable future state — habits, priorities, or skills — and working backward to define today’s design decisions.
Building on my doctoral research, I’ll introduce behavior-change frameworks built on backcasting. These models show that designing AI isn’t just about creating functionality; it’s about projecting human change. Attendees will learn how to shift AI from being merely accurate to being relevant: systems that understand human change, not just predict behavior.


