Hi Devon! This stood out to me at the end: "What I suspect the near future of AI will look like is better and better specialized “thinking” tools for single tasks, which frees up human beings to focus on deciding what to build."
This is a really insightful point, and I see it borne out in my own field (I do open source intelligence, also known as OSINT). This is one of those fields that sits between human creativity and big data tools. There are so many OSINT use cases, workflows, and tools, each with a very specific purpose... and they get outdated fast as new tools rise up to take their place. It's a field with a lot of innovation and growth (about 90% of the CIA's intelligence now comes from OSINT rather than clandestine collection methods).
And even though it's like a constantly evolving jigsaw puzzle of research tools and techniques, there is ALWAYS an overambitious engineering team that wants to create The One OSINT Tool To Rule Them All. I've seen this happen once or twice. It never works, of course, because the thing they are trying to replicate is the element of human creativity and ingenuity that knows how to leverage dozens of tools to pivot from one data point to another in search of the hidden bone. A much better use of engineering for OSINT is to take a much more narrowly focused problem and solve that to perfection, and then just continually update it... and give it an API or some other way to plug it into a dashboard or another integrative tool, if you want.
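To make that concrete, here's a minimal sketch of what I mean by a narrowly focused tool (everything here is hypothetical, not a real OSINT product): a tiny utility that does exactly one job, pulling domain names out of free text, which you could wrap in an API and plug into whatever dashboard you like.

```python
import re

def extract_domains(text):
    """Pull bare domain names out of free text -- the kind of tiny,
    single-purpose utility that stays useful because it does exactly
    one thing and can be called from any larger workflow."""
    # Hypothetical, simplified pattern: one or more dot-separated labels
    # followed by a letters-only TLD. Deduplicate and sort the results.
    pattern = r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b"
    return sorted(set(re.findall(pattern, text.lower())))

# Usage: feed it a scraped page, a paste dump, whatever.
print(extract_domains("Contact admin@example.com or visit sub.example.org"))
```

The point isn't the regex; it's the shape of the tool: one narrow problem, solved cleanly, with an obvious seam (a single function) where an API could go.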
So those lines reminded me of that -- I think what humans excel at is exactly what you said: the understanding of context, not just of immediate patterns and data. And even for humans, that takes time to develop.
One thing I've always thought of as the biggest problem for machine intelligence is Motivation. Or more specifically, Internal motivation. Take a Calculator as a simplified example. Absolutely amazing at math. But it has no reason to sit around solving math problems on its own. It waits, completely idle, until a human decides he needs it to calculate a 20% tip on a $20 bill. Which it does without complaint.
Humans, on the other hand, have external pressures. Like the need to eat. Like figuring out how to get something to eat, how to pay for it, how to get some woman to accompany him to the restaurant, what he can do to ascertain HER external pressures in a way that brings them together. And figuring out why she thought he was dumb for doing a simple percentage on a full-function scientific calculator.
AIs don't have any goals that don't come from us. They don't even have any answers that don't come from us; they just kind of average them out without really understanding. They don't know how things work in the real world, which is why they can't draw a buckle even with all the models and schematics on the net.