Amazon (AMZN.O) is spending millions on training an ambitious large language model (LLM) it hopes can rival top models from OpenAI and Alphabet (GOOGL.O), two people familiar with the matter told Reuters.
The model, codenamed “Olympus,” has 2 trillion parameters, the people said, which could make it one of the largest models being trained. OpenAI’s GPT-4, one of the best models currently available, is reported to have one trillion parameters.
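For a sense of scale, the parameter count alone implies an enormous memory footprint. The back-of-envelope arithmetic below is a rough sketch assuming half-precision storage (2 bytes per parameter); the figures are illustrative estimates, not reported specifications.

```python
# Rough memory footprint of model weights at the reported parameter counts,
# assuming 2 bytes per parameter (fp16/bf16 half precision) -- an assumption,
# since the actual precision used is not public.
BYTES_PER_PARAM = 2

def weight_memory_tb(num_params: float) -> float:
    """Return approximate weight storage in terabytes."""
    return num_params * BYTES_PER_PARAM / 1e12

for name, params in [("GPT-4 (reported)", 1e12), ("Olympus (reported)", 2e12)]:
    print(f"{name}: ~{weight_memory_tb(params):.0f} TB of weights")

# Olympus (reported): ~4 TB of weights -- far beyond any single accelerator's
# memory, so training requires sharding the model across many devices.
```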
The people spoke on condition of anonymity because the project’s details are still private. Amazon declined to comment. The Information reported on the project’s name on Tuesday. The team is led by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy.
As Amazon’s head scientist for artificial general intelligence (AGI), Prasad brought in researchers from the Alexa AI and Amazon science teams to work on training models, uniting the company’s AI efforts with dedicated resources.
Amazon has already trained smaller models, such as Titan. It has also partnered with AI model startups such as Anthropic and AI21 Labs, offering their models to Amazon Web Services (AWS) users. Amazon believes that having in-house models could make its offerings more attractive on AWS, where enterprise clients want access to top-performing models, the people said. However, there is no specific timeline for the new model’s release.
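For context, AWS customers typically reach these hosted models through Amazon Bedrock. The sketch below assumes the boto3 SDK, an AWS account with Bedrock model access enabled, and the Titan text model ID shown; it is an illustrative example, not an excerpt from Amazon’s documentation.

```python
import json
import boto3

# Bedrock runtime client; assumes AWS credentials and region are configured.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body in the format Titan text models expect.
body = json.dumps({
    "inputText": "Summarize the benefits of managed foundation models.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # one of Amazon's smaller Titan models
    body=body,
    accept="application/json",
    contentType="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```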
Large language models (LLMs) are the underlying technology of AI tools that learn from massive text datasets and generate human-like responses.
Training larger AI models is more expensive given the computing power required. On an earnings call in April, Amazon executives said the company would increase its investment in generative AI and LLMs while cutting back on transportation and fulfillment in its retail business.
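A common rule of thumb makes the cost point concrete: training compute is often approximated as roughly 6 FLOPs per parameter per training token. The sketch below applies that heuristic; the token count is an assumption chosen for illustration, not a reported figure for Olympus.

```python
# Back-of-envelope training compute using the common ~6 * N * D heuristic
# (FLOPs ~= 6 x parameters x training tokens).
def training_flops(num_params: float, num_tokens: float) -> float:
    return 6 * num_params * num_tokens

PARAMS = 2e12   # 2 trillion parameters, as reported
TOKENS = 2e12   # assumption: an illustrative 2 trillion training tokens

flops = training_flops(PARAMS, TOKENS)
print(f"~{flops:.1e} FLOPs")  # ~2.4e+25 FLOPs

# At 40% utilization of a 1e15 FLOP/s (1 petaFLOP/s) accelerator, that is
# about 6e10 seconds per device -- roughly 1,900 device-years -- which is
# why larger models are so much costlier to train.
```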