I want instant dumb AI

AI has superhuman abilities in multiple narrow areas, and OpenAI is boasting that Ph.D.-level reasoning is just around the corner. But I really don’t need that. Instead of answering my questions, I want it to understand the intent behind my question, similar to how a good friend knows why you ask a certain question in a particular way. Let me expand.

When I was doing my post-doc at Princeton, my wife would get the comment: “Oh, your husband must be very smart”, to which she correctly and tartly replied, “Yes, he knows a lot about very little.” The problem with frontier models today is not this; they “know” a lot about a lot of stuff. But they never have a good understanding of why I ask a particular question.

We humans have a way of communicating that is heavily influenced by context; the exact same words can mean many different things depending on the situation. And in professional life our conversations are iterative, often in the form of “OK, when you say this, what do you really mean? Because what I understand is this…” We feed off each other to get to the root cause, the core intent, understanding the problem to be solved in an iterative fashion. In this iterative communication we build a small mirror of the other person’s worldview. This is why we do onboarding in companies: to share with the new employee our worldview, our way of doing things, solving problems, and running processes. This is why research papers are often really boring to read: you have to understand the premise, the background, and the methodology before you can skip to the conclusion.

AI today is happy to give very elaborate answers, often with bullet points and code examples, ending on a note of “do you want me to elaborate or give alternatives?” Who speaks like that? I am not sure I would want such a person as a colleague. Instead I want an AI that asks back, challenges my assumptions, and doesn’t need pedantic input and prompt engineering. I want an AI that is about understanding my intent and my goals, not answering my questions.

You may say that I am using it wrong, and that I need to “git gud” at using AI. But I don’t think we should be designing systems that require us to change the way we communicate; instead we should be designing systems that are more like us. There is a reason we communicate as we do: we care about understanding each other, and when communication is good, we want to move toward a common goal.