AI and decision making
In November 2022, ChatGPT was released to the public, and I felt genuinely discouraged.
It wrote rather mediocre code, but it was already clear that it would quickly learn to do it much better.
What future would I personally have?
Since then, I have experimented a lot with AI and come to the conclusion that AI cannot make architectural decisions, which are the most important part of an architect's job.
AI takes no responsibility for its decisions, so it can generate an endless number of alternative solutions without ever committing to one.
That made me happy. The most important and the most interesting part of my work still belongs to humans.
But a few days ago, something happened that made me sad again.
I watched the 2019 Dota 2 match between OpenAI Five and the OG team. At the time, OG were the reigning world champions, having won The International the year before.
OpenAI's bots utterly dominated the humans in an extremely complex, fast-paced game. It looked like Michael Jordan playing basketball against kindergarten kids.
So it turns out that AI is actually very capable of making effective decisions. But then why, six years later, is AI still so bad at this?
I studied how OpenAI had trained their bots, through large-scale reinforcement learning with the bots playing millions of games against themselves, and it became even more depressing.
So AI is capable not only of thinking, but also of making effective decisions. It just needs to be trained on a narrowly specialized domain.
Current AI models are trained on general cases, which is why they perform poorly in highly specialized ones.
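To make "narrowly specialized training" concrete, here is a minimal sketch of the underlying idea: reinforcement learning through self-play on a single tiny game. To be clear, this is my own toy illustration, not OpenAI's setup (they ran large-scale PPO on Dota 2); tabular Q-learning on tic-tac-toe stands in for it, and every name and hyperparameter here is made up.

```python
# Toy self-play reinforcement learning: tabular Q-learning on tic-tac-toe.
# A hypothetical stand-in for large-scale RL self-play, not OpenAI's method.
import random
from collections import defaultdict

EMPTY, X, O = 0, 1, 2
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return X or O if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

Q = defaultdict(float)  # (board, move) -> value estimate for the player to move

def choose(board, epsilon):
    """Epsilon-greedy move selection over the Q-table."""
    moves = legal_moves(board)
    if random.random() < epsilon:
        return random.choice(moves)                   # explore
    return max(moves, key=lambda m: Q[(board, m)])    # exploit

def train(episodes=50_000, alpha=0.3, epsilon=0.2):
    for _ in range(episodes):
        board, player = (EMPTY,) * 9, X
        history = []                                  # (state, move, mover) per ply
        while True:
            move = choose(board, epsilon)
            history.append((board, move, player))
            board = board[:move] + (player,) + board[move + 1:]
            won = winner(board)
            if won or not legal_moves(board):
                # Monte Carlo update: pull every visited (state, move)
                # toward the final outcome from that mover's perspective.
                for state, m, mover in history:
                    reward = 0.0 if not won else (1.0 if mover == won else -1.0)
                    Q[(state, m)] += alpha * (reward - Q[(state, m)])
                break
            player = O if player == X else X

if __name__ == "__main__":
    train()
    # The greedy policy now plays tic-tac-toe near-perfectly,
    # and is useless at absolutely everything else.
```

The agent never sees anything outside this one game, and that is exactly the point: narrow training buys strong decisions inside the domain at the cost of everything outside it.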
It turns out that the disappearance of humans from software architecture is simply a matter of time, and of AI vendors becoming interested enough to tackle the problem.
So what are we going to do about it, humans?


