Amazon Reportedly Struggling to Build AI Capabilities to Make Alexa Smarter

newyhub

Amazon introduced its voice-first virtual assistant Alexa to the world in November 2014. The technology’s name is said to be inspired by the computer system onboard Star Trek’s Starship Enterprise, and it underscored CEO Jeff Bezos’ ambition of creating a conversational, intelligent assistant. However, a report claims that despite a tech demo last year showing a contextually aware Alexa, the assistant is nowhere close to being integrated with the artificial intelligence (AI) capabilities needed to make it smarter. A former Amazon employee who worked on Alexa AI has also highlighted knowledge silos and a fragmented organisational structure as detrimental to Alexa’s advancement.

Former Amazon employee highlights issues with improving Alexa

In a long post on X (formerly known as Twitter), Mihail Eric, who worked as a Senior Machine Learning Scientist at Amazon’s Alexa AI division between 2019 and 2021, shared his experience at the company and the challenges he faced. He also explained why he believes Alexa was a project doomed to fail.

Highlighting the “bad technical process” at the company, Eric said Amazon had a very fragmented organisational structure, which made it difficult to obtain the data needed to train large language models (LLMs). “It would take weeks to get access to any internal data for analysis or experiments. Data was poorly annotated. Documentation was either nonexistent or stale,” he added.

He also said that different teams were working on identical problems, which created an unproductive atmosphere of internal competition. Further, he found that managers were not interested in collaborating on projects that did not reward them.

In the post, Eric shared several instances where the organisational structure and policies got in the way of developing “an Amazon ChatGPT (well before ChatGPT was released).”

Amazon employees reportedly highlight Alexa’s struggles

Fortune published a lengthy report citing more than a dozen unnamed Amazon employees to highlight the issues the company is facing in integrating AI capabilities into the virtual assistant. One particular issue that surfaced was that Alexa’s current design makes it harder to integrate a modern tech stack.

Reportedly, Alexa was trained to respond in “utterances”, which essentially means it was built to act on a single user command and announce that it was running the requested action (or that it could not understand the user). As a result, Alexa was not programmed for back-and-forth conversation.
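To illustrate the distinction the report draws, here is a minimal, hypothetical Python sketch. It is not Amazon’s actual code or the Alexa Skills Kit API; the function names and intents are invented purely to contrast a single-turn, utterance-style handler with one that carries context across turns.

```python
# Hypothetical illustration only (not Amazon's code): an utterance-style
# handler maps one command to one acknowledgement and then stops, while a
# conversational handler can use everything said earlier in the session.

def handle_utterance(command: str) -> str:
    """Single-turn model: map a command to a canned acknowledgement."""
    intents = {
        "turn on the lights": "OK, turning on the lights.",
        "play music": "Playing music.",
    }
    return intents.get(command.lower(), "Sorry, I didn't understand that.")

def handle_conversation(history: list[str], user_turn: str) -> str:
    """Multi-turn model: the reply can depend on prior turns in the session."""
    history.append(user_turn)
    # If the user mentioned "lights" earlier, ask a clarifying follow-up.
    if "lights" in user_turn and any("lights" in h for h in history[:-1]):
        return "Do you mean the same lights as before, or a different room?"
    return "Got it. Anything else?"

if __name__ == "__main__":
    print(handle_utterance("play music"))  # one shot, no memory of past turns
    chat: list[str] = []
    print(handle_conversation(chat, "turn on the lights"))
    print(handle_conversation(chat, "actually, dim the lights"))
```

The point of the sketch is that the single-turn handler never needs session state, which is also why, per the report, the logged user data it produces is of limited use for training conversational models.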

The publication cited a former Amazon machine learning scientist, who explained that this model also taught Amazon customers a more efficient way of communicating with the virtual assistant: giving a short prompt for the desired action. This created another problem. Despite hundreds of millions of users actively speaking with Alexa every day, the resulting data is suited to utterance training, not conversations. This has reportedly created a big data gap in the organisation.

Further, the report claims that Alexa is a cost centre for Amazon, and the company loses billions each year as the technology cannot yet be monetised. Meanwhile, Amazon Web Services (AWS) has an AI assistant dubbed Amazon Q, which is offered to enterprise customers as an add-on and generates revenue. Over the years, the Amazon Q division has seen greater investment and even integration with Anthropic’s Claude AI model. However, Alexa’s AI team was reportedly not given access to Claude due to data privacy concerns.

When Fortune reached out to Amazon, a spokesperson reportedly denied the claims, saying the details provided by employees were dated and did not reflect the division’s current state of LLM development. While that may be true, the more conversational Alexa shown at last year’s tech demo is yet to be released to the public.


