Almost a year ago I made some predictions about what AI would bring in 2025; you can find them here. In short, I predicted that:
- AI would disrupt software development
- Retail and e-commerce would see even more hyper-personalization
- Social media would see even more AI-generated content
- AI would start to improve AI (self-research)
- AI would start to normalize, and the hype cycle wane
So how did I actually fare?
Personally, I have been using Claude Code and other LLMs extensively in 2025 for boilerplate code, exploring ideas and much faster prototyping. We are now at a point where non-technical people can create an MVP almost solely with prompting. And we are also at the point where said MVPs are riddled with security flaws and should not be deployed to production.
I greatly underestimated the inertia of e-commerce. At the beginning of the year I said e-commerce for skincare had not moved significantly in a decade, and that this year would be it. I was wrong. Woefully wrong. What became very clear, also from personal experience in 2025, is that skincare companies do not think like tech companies; they are stuck chasing incremental gains, efficiencies and better margins. They are not yet ready to completely rethink the consumer experience, product positioning, and how retail, online search, discovery and purchase come together. This might very well become a problem as agentic commerce starts ramping up.
Social media – honestly, this is just in a sad state. I did not spend much time on social media in 2025, and am actively trying to keep my kids off it as much as possible. Depending on which numbers you trust, somewhere between 10-40% of social content is now AI-generated. It all makes sense: the attention economy for eyeballs is now drilling down into seconds of attention span, and the more shots you have on goal, the greater the chance that something sticks. Estimates cite an average of 4-5 hours spent by teens on social media every day. That is wasted youth and wasted potential, only amplified by what AI can bring to the table.
AI improving AI – yes, at least in academic papers. Perhaps the most interesting articles I found were on the Darwin Gödel Machine and research agents. I anticipated more research in this field than what I saw in 2025. Needless to say, I still think this is a pivotal area of research that can fundamentally change the trajectory of AI and AI solution spaces. Where I see this becoming a new direction of research is when we figure out how to close the loop, such that the AI can make small perturbations to a physical system and measure the outcomes – think cellular assays, material science, etc. This closed loop of in-silico predictions, physical analog signals and reinforcement from observation is extremely powerful.
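To make the closed loop concrete, here is a minimal sketch of the idea with a simulated stand-in for the physical experiment. Everything in it (run_assay, the candidate perturbations, the deliberately trivial linear surrogate) is an illustrative assumption, not a real lab API and not the method from the papers above.

```python
# Sketch of a propose -> measure -> update loop (all names are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def run_assay(perturbation: np.ndarray) -> float:
    """Stand-in for the physical experiment (cellular assay, material test, ...)."""
    optimum = np.array([0.3, -0.7, 0.1])
    return -np.sum((perturbation - optimum) ** 2) + rng.normal(scale=0.05)

# Candidate perturbations the in-silico model is allowed to choose from.
candidates = rng.uniform(-1, 1, size=(200, 3))
observations = []  # (perturbation, measured outcome) pairs

for step in range(20):
    if len(observations) < 5 or rng.random() < 0.2:
        # Not enough data yet (or occasional exploration): pick a random candidate.
        idx = int(rng.integers(len(candidates)))
    else:
        # In-silico prediction: fit a simple ridge-regression surrogate on the
        # observations so far and pick the candidate it predicts is best.
        X = np.array([p for p, _ in observations])
        y = np.array([o for _, o in observations])
        X1 = np.hstack([X, np.ones((len(X), 1))])
        w = np.linalg.solve(X1.T @ X1 + 1e-3 * np.eye(4), X1.T @ y)
        preds = np.hstack([candidates, np.ones((len(candidates), 1))]) @ w
        idx = int(np.argmax(preds))

    # Physical analog signal: run the experiment and feed the result back.
    outcome = run_assay(candidates[idx])
    observations.append((candidates[idx], outcome))
    print(f"step {step:2d}  outcome {outcome:+.3f}")
```

The surrogate here is intentionally trivial; the point is the loop structure, where every physical measurement tightens the in-silico model that chooses the next perturbation.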
Normalization of AI: not in the least. We are still seeing one big splash after another about new capabilities of LLMs, which are more and more becoming synonymous with AI. Yes, we have seen incremental gains, and a few truly remarkable models (nano banana for images) in 2025, but we are still far from any AGI. I guess Altman and Musk have a year to catch up to their own predictions. What I personally experienced in 2025 is that for most companies AI is still very much a talk-and-PowerPoint exercise. It is difficult to go from a strategic position or roadmap to what actually needs to be done Monday morning. Part of the reason is that AI is still relatively new, and few in corporate positions have actually built with AI – leaving experience and expertise in short supply.
So what surprised me in 2025
The gap: the gap between what is in the headlines and the actual reality on the ground for many companies. There are certainly frontrunners, but there is an increasing gap in knowledge and expertise. CEOs and leadership teams are under pressure from their boards to do something with AI. But few really know what to do with it. So there is activity without targets, pilots without scale, and fatigue and lack of adoption in the workforce. I had honestly expected to see more maturity in the field in general in 2025, but AI continues to be a different beast than standard IT projects and standard change management.
Another thing that surprised me, and where I was very sceptical going into 2025, was agents. Having seen a few benchmarks on multi-agent systems and their failure rates, I was convinced that it was not yet prime time for any agentic workflows at scale. And then I built one. Specifically for data cleaning, which you can read more details about here. My takeaway from this build, and its continued refinement, is that agents are still stupid and require very careful prompting, very narrow solution spaces, and strict guidance in terms of orchestration – but then something magical happens. It actually works rather well. Now add in a human-in-the-loop at critical decision points and you start to see a proper augmentation of human capability. I am really excited for what agentic systems will start to look like once we fix memory and proper attention to detail in memory. I guess the transformer insight that attention is all you need now needs to extend to instructions, memory and task handling as well.
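For the curious, here is a minimal sketch of the pattern I am describing, not the actual data-cleaning agent: a whitelist keeps the solution space narrow, the orchestration is a fixed loop, and destructive steps are gated by a human. The function propose_fix() is a hypothetical placeholder where the LLM call would sit.

```python
# Narrow, strictly orchestrated agent loop with a human gate on destructive actions.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"trim_whitespace", "fix_date_format", "drop_row"}
DESTRUCTIVE = {"drop_row"}

@dataclass
class Proposal:
    action: str
    row: int
    reason: str

def propose_fix(row: int, value: str) -> Proposal:
    """Placeholder for the LLM call; here just a rule-based stand-in."""
    if value.strip() != value:
        return Proposal("trim_whitespace", row, "leading/trailing spaces")
    if value == "":
        return Proposal("drop_row", row, "empty value")
    return Proposal("fix_date_format", row, "normalize to ISO 8601")

def human_approves(p: Proposal) -> bool:
    answer = input(f"Row {p.row}: {p.action} ({p.reason})? [y/N] ")
    return answer.strip().lower() == "y"

def run(column: list) -> None:
    for i, value in enumerate(column):
        proposal = propose_fix(i, value)
        # Narrow solution space: anything outside the whitelist is rejected outright.
        if proposal.action not in ALLOWED_ACTIONS:
            print(f"Row {i}: rejected unknown action {proposal.action}")
            continue
        # Human-in-the-loop only at critical (destructive) decision points.
        if proposal.action in DESTRUCTIVE and not human_approves(proposal):
            print(f"Row {i}: drop vetoed by human")
            continue
        print(f"Row {i}: applying {proposal.action} ({proposal.reason})")

if __name__ == "__main__":
    run([" 2024-1-3", "", "03/01/2024"])
```

The design choice is the point: the agent never gets to invent actions, and the human is only asked about the decisions that cannot be undone.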
What will 2026 bring
That, my friends, is something I will write at the start of 2026. But I can share that I hope autoencoders will start to make a comeback in areas other than image generation, as I view them as an extremely strong vectorization method that can generalize to many different data types (a toy sketch of what I mean follows below). I think we will see more on memory systems, and on attention in memory architectures. And I am very excited for multi-agent / multi-human systems, which are still in their infancy.
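As that toy sketch of autoencoders as a general-purpose vectorization method: learn a small latent embedding for (here made-up) tabular rows and reuse it downstream. The dimensions, data and architecture are all assumptions for illustration.

```python
# Toy autoencoder producing reusable embeddings for tabular data.
import torch
import torch.nn as nn

n_features, latent_dim = 20, 4

encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_features))

x = torch.randn(256, n_features)  # stand-in for real tabular data
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    z = encoder(x)                 # the learned vector representation
    loss = loss_fn(decoder(z), x)  # reconstruction objective
    loss.backward()
    opt.step()

embeddings = encoder(x).detach()   # embeddings reusable for search, clustering, etc.
print(embeddings.shape)            # torch.Size([256, 4])
```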