Google Open-Sources Gemma-3: Rivals DeepSeek, with a Sharp Drop in Compute Requirements
Golden Finance reported on March 13 that Google (GOOG.O) CEO Sundar Pichai announced last night the open-source release of Gemma-3, the company's latest multimodal large model, designed for low cost and high performance. Gemma-3 comes in four parameter sizes: 1 billion, 4 billion, 12 billion, and 27 billion. Even the largest 27-billion-parameter version needs only a single H100 GPU for efficient inference, at least a 10x reduction in computing power compared with similar models achieving the same results, making it currently the strongest small-parameter model. According to blind-test data from LMSYS Chatbot Arena, Gemma-3 ranks second only to DeepSeek's R1-671B and above well-known models such as OpenAI's o3-mini and Llama3-405B.
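To make the single-GPU claim concrete, the sketch below shows one way to run the 27B instruction-tuned checkpoint on one accelerator with Hugging Face transformers. This is a minimal sketch, not Google's reference setup: it assumes a recent transformers release with Gemma 3 support and that the checkpoint is published as google/gemma-3-27b-it on the Hugging Face Hub.

```python
# Minimal sketch: Gemma-3 27B inference on a single GPU (e.g., one 80 GB H100).
# Assumptions: a recent `transformers` release with Gemma 3 support, and the
# checkpoint ID "google/gemma-3-27b-it" on the Hugging Face Hub.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-27b-it"  # assumed Hub ID

# In bfloat16 the 27B weights take roughly 54 GB, which fits on one H100.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)

# Gemma-3 is multimodal, so prompts go through the chat template; a
# text-only turn is just a single "text" content part.
messages = [
    {"role": "user",
     "content": [{"type": "text", "text": "In one sentence, what is Gemma-3?"}]},
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=64)

# Strip the prompt tokens and decode only the newly generated reply.
reply = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(reply, skip_special_tokens=True))
```

Here device_map="auto" simply places all weights on the one visible GPU; on smaller cards, the usual fallback would be the 1B, 4B, or 12B variants, or a quantized load.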