Group aims to create open, efficient, and sustainable platforms for scalable AI
SANTA CLARA, Calif., Oct. 17, 2023 /PRNewswire/ -- Several companies in the AI industry today announced a new consortium aimed at making AI platforms more open, efficient, and sustainable. The initial members include Ampere, which leads the effort, along with Cerebras Systems, Furiosa, Graphcore, Kalray, Kinara, Luminous, Neuchips, Rebellions and Sapeon. Additional participants are expected to join in the coming months.
The AI Platform Alliance was formed to promote greater collaboration and openness in AI, and it arrives at a pivotal moment not just for the technology industry but for the world at large. The recent explosion of AI has created unprecedented demand for compute power to train and run these workloads. While AI training requires large amounts of compute up front, AI inference can require up to 10x more total compute over time, a problem that only grows as the scale of AI usage increases. One of the goals of the AI Platform Alliance is to increase the power efficiency and cost efficiency of AI hardware so that it delivers better total performance than GPUs.
Because AI solutions can be complex to implement, the AI Platform Alliance will work together to validate joint AI solutions that provide a better alternative to the GPU-based status quo. By developing these solutions as a community, the group will accelerate the pace of AI innovation by making AI platforms more open and transparent, by increasing the efficiency of AI in solving real-world problems, and by delivering sustainable infrastructure at scale that is environmentally friendly and socially responsible.
The AI Platform Alliance is open today to AI companies that are building hardware solutions and are looking to change the status quo. Companies interested in joining can access more information and apply at www.platformalliance.ai.
More information from each member company can be found below.
About Ampere
Ampere is a modern semiconductor company designing the future of cloud computing with the world's first Cloud Native Processors. Built for the sustainable cloud with the highest performance and best performance per watt, Ampere processors accelerate the delivery of all cloud computing applications. Ampere Cloud Native Processors provide industry-leading cloud performance, power efficiency and scalability. For more information, visit Ampere Computing.
About Cerebras Systems
Cerebras Systems is a team of pioneering deep learning researchers, computer architects, and solutions specialists of all types. We have come together to bring generative AI to enterprises and organizations of all sizes around the world. Our flagship product, the CS-2 system, powered by the WSE-2, the world's largest and fastest AI processor, makes training large models simple and easy by avoiding the complexity of distributed computing. Our software tools simplify the deployment and training process, providing deep insights and ensuring best-in-class accuracy. Through our team of world-class ML researchers and practitioners, who bring decades of experience developing and deploying the most advanced AI models, we help our customers stay on the cutting edge of AI. Cerebras solutions are available in the cloud, through the Cerebras AI Model Studio, or on premises. For further information, visit https://www.cerebras.net.
About Furiosa
FuriosaAI is creating next-generation NPU (neural processing unit) products to help you unlock the next frontier of AI deployment. Our inference-focused NPU products will reduce your costs and energy consumption so you can innovate without constraint.
Established in South Korea by former AMD, Samsung, and Qualcomm engineers, FuriosaAI now has more than 100 employees, including leaders and advisors with decades of experience at Meta AI, Western Digital, Sun Microsystems, Groq, and Intel. The FuriosaAI team is located in Asia, North America, and Europe, giving the company global reach across the industry.
About Graphcore
Graphcore compute systems are accelerating the AI revolution. Powered by the groundbreaking Intelligence Processing Unit (IPU), Graphcore delivers leading-edge AI performance with unprecedented efficiency. IPUs in the cloud are used around the world by organisations building their intelligent compute capabilities, including AI-centric startups, large multi-national corporations and both public and private research institutions. Graphcore is backed by some of the world's leading investors and has attracted more than $700m of funding. The company is based in Bristol, UK, with offices across Europe, Asia and North America. Graphcore.ai
About Kalray
Kalray is a leading provider of hardware and software technologies and solutions for high-performance, data-centric computing markets, from cloud to edge.
Kalray provides a full range of products to enable smarter, more efficient, and energy-wise data-intensive applications and infrastructures. Its offerings include its unique patented DPU (Data Processing Unit) processors and acceleration cards, as well as its leading-edge software-defined storage and data management products. Separately or in combination, Kalray's high-performance solutions allow its customers to improve the efficiency of data centers or design the best solutions in fast-growing sectors such as AI, Media & Entertainment, Life Sciences, Scientific Research, Edge Computing, Automotive and others.
Founded in 2008 as a spin-off of the well-known French CEA research lab, and backed by corporate and financial investors such as Alliance Venture (Renault-Nissan-Mitsubishi), NXP Semiconductors and Bpifrance, Kalray is dedicated, through technology, expertise, and passion, to offering more: more for a smart world, more for the planet, more for customers and developers. www.kalrayinc.com
About Kinara
Kinara is a leader in edge AI acceleration, building a cost- and energy-efficient edge AI inference platform supported by comprehensive software tools. Designed to enable smart applications across the retail, medical, Industry 4.0, automotive, and smart city market segments, Kinara's AI processors, modules, and software can be found at the heart of the AI industry's most exciting and influential innovations. Led by Silicon Valley veterans and a world-class development team in India, Kinara envisions a world of exceptional customer experiences, better manufacturing efficiency, and greater safety for all.
About Luminous
Luminous Computing, founded in 2018, designs and manufactures hardware for demanding generative AI inference applications. The company's proprietary compute and memory architecture enables high-bandwidth access to multi-terabyte banks of DDR memory. Luminous Gen 1 AI inference cards offer native PyTorch integration as well as compute and memory bandwidth in line with or better than competing hardware. Critically, by leveraging its custom architecture, Luminous accomplishes this without using High Bandwidth Memory. Near-future products from Luminous include its Gen 1.X card with 2 TB of memory and a Gen 2 card with networking for large-scale applications. Luminous is backed by Gigafund, Gates Frontier, Neo, Alumni Ventures, and Era, among others.
About Neuchips
NEUCHIPS Corp. is an application-specific compute solution provider based in Hsinchu, Taiwan. Founded by a team of veteran IC design experts in 2019, NEUCHIPS's mission is to "Smarten AI computing through innovative IC design to make Intelligence Everywhere." The NEUCHIPS management and R&D team has decades of experience at top IC design houses and holds 22 patents in signal processing, neural networks, and circuits. As an OCP community member, NEUCHIPS devotes itself to providing the most cost-effective AI inference accelerators for the best TCO (Total Cost of Ownership).
About Rebellions
Rebellions Inc. is a South Korea-based AI silicon startup, founded in 2020 by former chip designers, software developers, and product managers from IBM, Morgan Stanley, SpaceX, Lunit and others. The company's aim is to develop an AI accelerator with maximum flexibility while incorporating insights from the mobile space to reduce power consumption. Rebellions has commercialized its ATOM (the fastest server-class inference accelerator) with KT's data center in Korea. Backed by KT (Korea's No. 1 data center company), Temasek's Pavilion Capital and top-tier Korean VCs, Rebellions has launched two products built on TSMC 7nm and Samsung 5nm EUV processes. With its LLM/generative AI lineup prepared, Rebellions will launch its next-generation product, REBEL, for LLM acceleration.
About Sapeon
SAPEON is an independent corporation targeting the global market with SK Telecom's self-developed AI semiconductor. SAPEON is the first result of cooperation among the three SK ICT Alliance companies: SK Telecom, SK Square and SK Hynix. For global business, SAPEON is headquartered in the US as SAPEON Inc. in Santa Clara, California, the heart of Silicon Valley. SAPEON Korea, a Korean company, oversees business in Korea and Asia. The SAPEON chip is the first Korean non-memory semiconductor for data centers that executes the large-scale calculations necessary for realizing AI services at high speed and low power. For more information about SAPEON and its products, visit https://www.sapeon.com/, LinkedIn (sapeon), Facebook (SAPEON.Korea), Instagram (sapeonkorea), and YouTube (sapeon).
Contact:
Alexa Korkos
Ampere Computing
press@amperecomputing.com
SOURCE Ampere