The rapid advancement of LLMs necessitates specialized, robust infrastructure. This presentation provides an overview of the underlying technologies and stringent networking requirements essential for modern AI backend deployments. We will explore the architectural components and computational demands driving LLM training, highlighting the pivotal role of high-performance networking. Particular focus is placed on the networking requirements for efficient GPU interconnection within data centers.

Nokia
Nokia Regional Product Line Manager
Moderator / 王彥傑
Director-General of the Department of Information Technology / 趙式隆, 詹婷怡, 余若凡, 王彥傑, 郭奕豪
Jack Kwok、Achie
Steve Crocker
Edgemoor Research Institute
Tony Smith
APNIC
梁增偉
Akamai
Bastien Claeys
Nokia
Stanley Chen
Tomoki Yoshikawa
Home NOC Operators Group
Philip Paeps
Alternative Enterprises
Masataka Mawatari
JPIX
Yoshinobu Matsuzaki
IIJ
Tashi Phuntsho
FLEXOPTIX
岑育霖
RETN
Taisuke Sato
Seiko Solutions
Scott Fisher
Team Cymru
Pavel Odintsov
FastNetMon LTD