A recent article in The New York Times by Gary Marcus argues that AI is an industry lost on the road to progress. He contends that to reach human-like intelligence, the field needs a top-down approach, like the one the physics community took with CERN to create the Large Hadron Collider, instead of relying on today’s approaches. Marcus states that existing organizations, like the Elon Musk-led OpenAI and the Partnership for AI (in which the ACLU, Allen Institute, Human Rights Watch, Google, Apple, Amazon, Microsoft and OpenAI are all represented), are too small to be effective, and that the large companies armed with the data sets and algorithms needed to make real headway are too focused on ad optimization to concern themselves with real advancement.
The issue is not the pace of progress, which is moving incredibly fast. The issue is runaway expectations about AI.
It’s true that there is a long way to go before we have AI capable of tackling challenges that require implicit knowledge, the way a human can pick up a guitar and figure out how to play, or listen to a foreign language and learn to speak with no formal classes. But it’s not up to OpenAI and Partnership for AI to focus computer science on solving this problem. They serve as important meeting places for a community that has more than just technical equations to solve — they tackle other important issues like privacy, ethics and public knowledge, each of which could actually get AI stuck in a rut if not addressed.
But AI is not stuck: It is making measurable gains in areas like natural language processing and object classification, surpassing past benchmarks and pushing toward 90 percent proficiency. Growth through programs like ImageNet and countless others has also shown marked progress in a short time. One day, in the not-too-distant future, as these disparate fields of AI research progress at the current pace, they will likely merge. Then we will have AI agents on their way to genuine perception without being fed data, as Marcus desires. However, incremental advances, apparent in the slew of consumer products like Alexa, Siri and Google Home, are coming because of competition within industry, not in spite of it.
That’s because for AI to truly advance, competition needs to fuel innovation, not the top-down bureaucracy Marcus suggests. It’s hard to imagine computer science successfully pivoting to physics’ CERN model. That’s not because the model wasn’t wildly successful; it was, and physics deserves credit for that. But there are so many questions left to answer in computer science that it could never mirror physics’ hard-fought landscape, where the entire braintrust of the field is needed to solve the few complex mysteries still standing outside the grand unified theory. It takes serious dollars to approach the questions surrounding how quantum mechanics fits into that picture; finding the Higgs boson cost an estimated $13.25 billion. And, perhaps even more importantly, the answers to the questions left in physics can’t be easily monetized.
By comparison, computer science innovation comes at clearance prices, with no machine or storage costs in the tens of billions. So innovation becomes intrinsically decentralized. The list of interdisciplinary spaces left to research before we get anywhere near Marcus’ desired sentient beings would be a long one.
Despite the absence of a unified research program focused on one grand mystery, small groups of researchers are making a lot of headway on their own, continuing AI’s competition-friendly history. Change is happening so quickly that researchers clamor to be the first to get their ideas into the public eye. This has bred a culture where competition is so fierce that even the wait to get an article into a peer-reviewed journal is seen as too long by some. Instead, researchers are leveraging tools like Cornell’s arXiv to release their innovations at their natural pace.