
As we enter 2025, artificial intelligence is no longer just a tool, a passing trend, or a buzzword - it has become a transformative force reshaping how humanity works, thinks, and thrives. And it's here to stay. Over the past few years, we've explored the technical limits and vast opportunities of AI, particularly generative AI, through experimentation, rapid innovation, and the building of an enormous number of proofs of concept (PoCs). 2025, however, is the year when these technologies will move from PoCs to production systems at scale. These systems will shape industries, communities, and individual working and private lives, making this the critical year to reflect on their foundations and make any needed corrective pivots.
While many companies are investing heavily in solving technical challenges such as reliability, accuracy, and security, it is imperative to deepen the shift toward humans - embedding our values, addressing our needs, and prioritising healthy human-AI interactions through clear, measurable actions that guide development, deployment, and evaluation.
I continue to believe that AI has the potential to bring immense good: it can help us tackle global challenges like climate change, make healthcare more accessible, improve education, and reduce inefficiencies across industries. However, these benefits can only be fully realised through hand-in-hand collaboration between humans and AI. This partnership must be carefully reflected upon, scientifically studied, and rigorously verified to ensure that AI integrates into our daily lives in ways that are meaningful, ethical, and aligned with human values.
Who should reflect, and on what?
It is widely acknowledged that AI solutions must be grounded in human values, ethics, and psychology to ensure they are built for the greater good. But how do we put this into practice when ideating, designing, building, and operating AI systems? And who should be involved in this reflection process? I believe everyone has an active role to play.
In the past, much of the focus has been on collaboration between business leaders, domain experts, and technology specialists to shape AI solutions. However, as we move into 2025, the need for deep human behavioural expertise and research is growing, so that we can better understand how AI shapes human happiness and behaviour. I made this point already at the beginning of 2023, but the timing was wrong: at that time, we weren't yet in a position to appreciate the importance of that expertise at scale. Today, we are. In addition, the rapid spread of AI tools and solutions is making it easier for people to shift from passive observers to active participants in the experimentation and reflection process, as accessible technologies allow individuals to engage directly and contribute their perspectives. Incorporating these diverse viewpoints into AI development will foster a more holistic understanding of its societal, cognitive, and psychological effects.
Reflecting on one's own AI solutions remains essential, but it is equally important to critically examine the solutions built and deployed by others. By analysing how external applications operate, their intentions, and their impacts, we can gain valuable perspectives that inform and improve the design of our own systems. These external reflections help uncover blind spots, challenge assumptions, and offer fresh insights that can enhance the development and operation of AI solutions.
Let's consider the example of Meta's AI-driven synthetic accounts, introduced some years ago but largely unnoticed until the final weeks of 2024. The creation of these accounts marked a significant shift: from the previous goal of identifying and removing harmful deep-fake bots to the deliberate introduction of synthetic accounts designed to interact with human users.
- What was the purpose of these accounts, and who needed them?
- Were they designed to improve the well-being and happiness of their followers, or were they primarily tools for engagement and influence?
- Were they intended to foster trust in technology (which didn't work out), or did they inadvertently erode it by blurring the lines between human and AI interactions?
- How did these interactions impact users' health, sense of agency, and ability to discern authenticity?
Did Meta have answers to all these questions when they launched the accounts? Do they have answers now, as critical voices have emerged? When and how did they attempt to address these questions during the development process? Did they incorporate fast, agile interactions, real-time feedback loops, and cross-disciplinary sprints throughout the design, development, and testing phases to assess trust, impact, and address specific concerns before the product was operationalised? What did their risk analysis procedure look like? And finally, how will Meta's approach to these accounts evolve in the future?
A Shared Responsibility
I believe 2025 is an ideal year to reflect even more deeply and holistically than we have in the past. As AI continues to evolve, its future will depend on the contributions of everyone involved. By promoting cross-disciplinary collaboration and encouraging comprehensive reflection, we can ensure that AI solutions not only address technical challenges but also align with humanity's values and aspirations. Let this be the year we all embrace the responsibility of shaping AI for the better, together.
Originally published as a LinkedIn article.