When Elon Musk said that the future of AGI is Rust, he was probably talking about Rust's performance and efficiency. For AI inference applications, Rust can be orders of magnitude faster than Python and produce far smaller binaries. Wasm is an ideal sandbox for running Rust applications in the cloud. In this talk, we will focus on two AI and LLM application areas that are especially relevant to Wasm and Rust:
1. Creating Rust applications that run existing LLM model inference inside Wasm sandboxes. We will discuss the WASI-NN API for Rust and the WasmEdge plugins for TensorFlow or PyTorch (see the sketch after this list).
2. Creating LLM plugins and end-user applications (e.g., chatbots) using the APIs of public and private LLMs. Compared with VM- or Docker-based solutions, Wasm applications can be embedded into the LLM infrastructure easily and cheaply.
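To make the first topic concrete, here is a minimal sketch of what WASI-NN inference can look like in Rust. It assumes the publicly available wasmedge-wasi-nn crate with the GGML (llama.cpp) backend; the crate version, model file, "default" alias, buffer size, and command-line flags are illustrative assumptions, not code from the talk.

```rust
// Cargo.toml (assumption): wasmedge-wasi-nn = "0.7"
// Build for a wasm32-wasi target and run with WasmEdge plus its WASI-NN GGML plugin, e.g.:
//   wasmedge --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf infer.wasm
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // "default" must match the alias passed to --nn-preload above (assumption).
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load the preloaded GGUF model");

    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create an execution context");

    // The GGML backend takes the raw prompt bytes as a 1-D U8 tensor at input index 0.
    let prompt = "What is WebAssembly?";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("failed to set the prompt tensor");

    // Run inference from inside the Wasm sandbox; the heavy lifting happens in the
    // native WasmEdge plugin.
    ctx.compute().expect("inference failed");

    // Read back the generated text; the buffer size is an arbitrary assumption.
    let mut output = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut output).expect("failed to read output");
    println!("{}", String::from_utf8_lossy(&output[..n]));
}
```

The same Wasm binary can then be deployed unchanged on any machine where a WasmEdge runtime with the WASI-NN plugin is installed, which is part of what makes this approach attractive for embedding into LLM infrastructure.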
We will discuss a variety of real-world use cases of LLM services and applications built on top of Wasm and Rust. We will also cover the current status and future evolution of the related Rust APIs and Wasm specifications.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Rust Global is the first Rust Foundation-hosted gathering dedicated to the use of Rust in global leadership settings.
Rust Global is an opportunity for technology decision-makers, business leaders, and Rust advocates to connect with, learn from, and inspire one another. Join us to discuss a more resilient, secure, and sustainable future built with the Rust programming language.
To learn more, visit the event website.
Registration Cost: $15
How to Register: Pre-registration is required. To register for Rust Global, add it to your WasmCon registration.