• July 18, 2024

AI processor company backed by Sam Altman hires former Apple executive

Jean-Didier Allegrucci joins Rain AI to lead hardware engineering.

AI processor developer Rain AI, backed by OpenAI's Sam Altman and several investment banks, has hired former Apple chip executive Jean-Didier Allegrucci to lead its hardware engineering. The high-profile hire signals that Rain AI has serious ambitions for its processors.

Allegrucci's LinkedIn profile, which has not yet been updated, shows that he worked in Apple's System on Chip (SoC) division for over 17 years, starting in June 2007, and supervised the development of more than 30 processors used in iPhones, Macs, iPads, and Apple Watches. According to a blog post by Rain AI, Allegrucci played a crucial role in building Apple's world-class SoC development team, overseeing areas such as SoC methodology, architecture, design, integration, and verification, which makes his experience extremely valuable to Rain AI. Before joining Apple, Allegrucci worked at Vivante and ATI Technologies, both developers of graphics processing units.

"We are thrilled to have a hardware leader like J-D Allegrucci overseeing our chip work," said Rain AI CEO William Passo. "Our novel Compute-in-Memory (CIM) technology will help unleash the true potential of today's generative AI models and bring us closer to running the fastest, most cost-effective, and most advanced AI models anywhere."

At Rain AI, Allegrucci will work alongside Chief Architect Amin Firoozshahian, who joined from Meta Platforms after five years there. The pairing combines deep industry experience with innovative thinking in pursuit of the company's ambitious goals. Even so, building their first System on Chip at Rain AI will take considerable time; such projects typically span several years.

Rain AI focuses on Compute-in-Memory technology, which processes data where it is stored, loosely mimicking the human brain. Compared with traditional AI processors, it promises significantly better energy efficiency. No mass-produced Compute-in-Memory chip is on the market today; AI workloads are instead served mainly by GPU processors such as Nvidia's H100 and B100/B200 and AMD's Instinct MI300X, which must move large amounts of data to and from memory to work effectively.

Earlier this month, Rain AI licensed Andes Technology's AX45MPV RISC-V vector processor, with ACE/COPILOT instruction customization, and partnered with Andes' Custom Computing Business Unit (CCBU) to accelerate development of its Compute-in-Memory generative AI solutions. The collaboration aims to strengthen Rain AI's product roadmap and deliver scalable AI solutions before the beginning of 2025. Given that developing a complex processor from scratch takes time, and that Rain AI is relying on Andes to help build its first SoC by early 2025, the processor development led by Allegrucci will likely need at least a few more years.

The Startup's Memory Computing Strategy

With the boom in artificial intelligence (AI), electricity, a core input to AI infrastructure, is attracting growing attention. AI is undeniably an energy-intensive field, and data centers and supercomputing centers are voracious consumers of power. Some experts estimate that by 2027, the AI industry's annual electricity consumption could reach 85 to 134 terawatt-hours, roughly the annual electricity demand of a country like the Netherlands. An earlier report from the International Energy Agency (IEA) likewise noted that demand from artificial intelligence and cryptocurrency will drive a significant near-term increase in data center electricity consumption.
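As a rough sanity check on the scale of that projection, the cited 85 to 134 TWh per year can be converted to an average continuous power draw. The conversion below is a back-of-the-envelope sketch, not a figure from the article:

```python
# Convert the projected annual energy use (85-134 TWh/yr) into an
# average continuous power draw, to make the scale easier to grasp.
HOURS_PER_YEAR = 365 * 24  # 8760

def twh_per_year_to_avg_gw(twh: float) -> float:
    """Average continuous power (GW) implied by an annual energy use in TWh."""
    return twh * 1e12 / HOURS_PER_YEAR / 1e9

low, high = 85.0, 134.0
print(f"{twh_per_year_to_avg_gw(low):.1f} GW to {twh_per_year_to_avg_gw(high):.1f} GW")
# -> 9.7 GW to 15.3 GW
```

Even the low end of the range implies nearly 10 GW of continuous demand, on the order of ten large power plants running around the clock.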

Many industry leaders have warned of a potential energy crisis for AI. In January of this year, OpenAI CEO Sam Altman acknowledged that the AI industry faces an energy crisis, warning that the future of AI requires an energy breakthrough because AI will consume far more electricity than people expect. Tesla CEO Elon Musk said at a conference in late February that the chip shortage may have passed, but AI and electric vehicles are expanding so rapidly that the world will face tight supplies of electricity and transformers next year. NVIDIA CEO Jensen Huang put it even more bluntly: "The endgame of AI is photovoltaics and energy storage! If we only consider computing, we would need to burn the energy of 14 Earths, and super AI will become a bottomless pit of power demand."

A key reason is that AI requires vast amounts of data, and moving that data between memory chips and processors consumes a great deal of electricity. For at least a decade, therefore, researchers have tried to save power by building chips that process data where it is stored, an approach usually called "memory computing."
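The gap between the cost of moving data and the cost of computing on it can be illustrated with rough, widely cited per-operation energy estimates. The figures below are ballpark numbers for a ~45 nm process drawn from academic literature, not from this article; exact values vary considerably by technology node and design:

```python
# Approximate per-operation energy figures (picojoules), ~45 nm process.
# Order-of-magnitude estimates only; actual values depend on the node.
ENERGY_PJ = {
    "fp32_add": 0.9,
    "fp32_mult": 3.7,
    "dram_read_32bit": 640.0,
}

# Fetching one 32-bit operand from DRAM costs far more energy than
# computing with it -- the overhead memory computing aims to eliminate.
ratio = ENERGY_PJ["dram_read_32bit"] / ENERGY_PJ["fp32_mult"]
print(f"DRAM fetch ~ {ratio:.0f}x the energy of an FP32 multiply")
```

Ratios like this, roughly two orders of magnitude, are why reducing off-chip data movement, rather than speeding up arithmetic units, is the focus of memory computing designs.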

Memory computing still faces technical challenges and has only recently emerged from the research stage. With AI's high energy consumption raising serious doubts about its economic feasibility and environmental impact, technologies that improve AI's energy efficiency could bring huge returns, which makes memory computing an increasingly exciting topic. Major chip manufacturers such as TSMC, Intel, and Samsung Electronics are all researching it, and individuals such as OpenAI CEO Sam Altman, companies like Microsoft, and many government-affiliated entities have invested in startups working on the technology.

It remains uncertain whether the technology will become an important part of AI's future. Memory computing chips are generally very sensitive to environmental factors such as temperature changes, which can cause computational errors, and startups are researching various ways to mitigate this. Moreover, replacing incumbent chips with a new technology is often expensive, and customers are usually hesitant unless they are confident of significant improvements; startups must convince them that the benefits are worth the risk.

So far, memory computing startups have not taken on the hardest part of AI computing, training new models, which is mainly handled by AI chips from companies like NVIDIA. Nor do they appear to be planning to compete with NVIDIA head-on. Their goal is to build businesses around inference, that is, using existing models to take prompts and produce output. Inference is less complex than training, but it runs at enormous scale, so chips designed specifically to make inference more efficient may find a promising market.

Memory computing companies are still exploring the best uses for their products. Axelera, a memory computing startup based in the Netherlands, is targeting computer vision applications in automotive and data centers. Backers of Mythic, a memory computing startup based in Austin, Texas, believe the technology is an ideal fit for AI security cameras and similar applications in the short term, but ultimately hope its chips can be used to train AI models.
