Eric Walkingshaw, an ARCS Scholar Alumnus who earned his PhD at Oregon State University, is cautiously excited about Artificial Intelligence (AI) but is very excited about his current role as a compiler engineer at Elemental Cognition.
“[The AI systems that] people are most excited about right now are Large Language Models (LLMs), such as ChatGPT,” he says. “LLMs are essentially language extenders. You can ask one to synthesize text, and it can do that. What’s amazing about these models is how powerful and general purpose they are,” Walkingshaw explains. “They’ve built the models that can do this with massive amounts of data, on the order of significant percentages of the internet. And the result is they are kind of amazing,” he says.
Walkingshaw encourages everyone to engage with LLMs: “It feels like you are talking to something intelligent.”
But, Walkingshaw says, LLMs are flawed. “The problem is, when you dig into the details, the details are often wrong,” he says. “Under the hood, they are sort of smooth-talking liars that are prone to going off the rails during prolonged interactions and saying bizarre things.”
Enter Elemental Cognition, a company that is working to make AI more trustworthy and accurate. Walkingshaw works as a compiler engineer on the product Cogent. A "compiler" is software that enables a computer system to "understand" a new programming language by translating that language into one it already understands. Cogent aims to let domain experts, who are not programmers, write down all their domain knowledge in a way that Cogent can understand and use. Unlike an LLM, which accepts any text, Cogent accepts only a precisely defined input language.
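To make the idea of a compiler concrete, here is a toy sketch (entirely illustrative, not Cogent's or any real compiler's implementation): it translates a tiny made-up arithmetic language into Python source text, a language the machine already "understands", and then runs the result.

```python
# Toy "compiler": translate a tiny invented prefix-notation language
# (e.g. ["add", "2", ["mul", "3", "4"]]) into Python expression text.
# This is a minimal sketch for illustration only.

def compile_expr(expr):
    """Recursively translate an expression into Python source text."""
    if isinstance(expr, str):
        return expr  # a bare number compiles to itself
    op, left, right = expr
    py_op = {"add": "+", "mul": "*"}[op]  # map source ops to Python ops
    return f"({compile_expr(left)} {py_op} {compile_expr(right)})"

source = ["add", "2", ["mul", "3", "4"]]
python_code = compile_expr(source)
print(python_code)            # the translated program: (2 + (3 * 4))
print(eval(python_code))      # run it in the target language: 14
```

The same principle scales up: a real compiler translates a richer source language through many more stages, but the core job is still translation into a language the system already knows how to execute.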
“A domain expert will be able to start writing down, in a language that looks and reads like plain English, all the relevant concepts, facts, and rules needed to reason in their domain of expertise. The language is a special subset of English that Cogent can understand,” he explains.
While excited about LLMs and AI, Walkingshaw is wary “about the high degree of harm” they can create if not used carefully, and he sees Elemental Cognition as providing an alternate path. “It’s a path that requires a bit more investment upfront to use but significantly minimizes the potential for harm of these systems, and so is a much better foundation to build on,” he says. “They are starting from correctness first and sort of extending the capabilities of the AI with LLMs.” The alternative is to start with the generality of Large Language Models and try to constrain them to be correct.
“If you start with something correct and grow it from there, you’re more likely to have something correct at the end,” he says.
Walkingshaw was an ARCS Scholar at Oregon State University, earning his PhD in electrical engineering in 2013. He did a postdoc in Germany, then returned to Oregon State as a professor for several years before moving back to industry. He is content to have a position that uses his specific skills while working on the growing wave of AI.
He considered the ARCS Scholar award to be “life-changing.” He and his wife were in school simultaneously, and “times were very lean.” Walkingshaw used his award to travel to conferences, and often his wife would join him. He also fondly remembers his award sponsors, two members of the Oregon chapter. “They were cheering for me and just so enthusiastic about it,” he said.