ASPLOS 2026 Tutorial

Jaseci: An Open-Source AI-Native Language and Runtime Stack

From Agentic AI Development to Systems Research

Mar 22 (Sunday) Morning Session · ASPLOS'26 @ Pittsburgh

Venue Warhol Room, The Landing Hotel

Agentic AI development is broken. Too many tools, too many languages, too much glue code. Jaseci fixes this: an open-source language and runtime extending Python with native AI and scaling constructs. Prompt less, smile more. No boilerplate, no DevOps, no manual prompts.

Explore Jaseci.org

Tutorial Themes

01
Theme 1
Hands-On Agentic AI Development
For developers and students
Annotate functions with by llm() and the compiler writes the prompts for you. Build multi-agent systems with walkers traversing graphs. Zero boilerplate.
02
Theme 2
Research the Jaseci Stack Enables
For students and researchers
Agentic AI: Native language constructs for multi-agent systems
Auto-Deploy & Scale: Laptop to Kubernetes, zero config. Auto monolith-to-microservice
ML Compilers: Python-transpiling frontend that fixes PyTorch 2 FX graph breaks

Jaseci Ecosystem in Action

100K+
Monthly PyPI Downloads
10×
Code Reduction
0
Prompts Required


Agenda (Mar 22, 2026, ASPLOS'26)

9:00 – 9:30

Session 1: Introduction to Jaseci

An overview of the Jac language, the Jaseci runtime, and the open-source ecosystem, and how they address fundamental gaps in modern AI application development.

9:30 – 10:00

Session 2: Agentic AI - Beyond Prompt Engineering

How Jaseci's by llm() construct uses semantic type annotations to auto-generate prompts, replacing fragile hand-crafted prompting with structured, type-safe reasoning.
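As a minimal sketch of the idea (the model name is an arbitrary choice, and exact import paths may differ across mtllm versions), a Jac function can delegate its body to an LLM, with the prompt derived from the signature and type annotations rather than a hand-written string:

```jac
import from mtllm { Model }

glob llm = Model(model_name="gpt-4o");

obj Review {
    has text: str;
    has stars: int;
}

# No prompt string anywhere: the function name, parameter types,
# and return type are compiled into the prompt automatically.
def summarize(review: Review) -> str by llm();
```

Because the call site is an ordinary typed function call, the LLM's output is parsed and checked against the declared return type instead of being handled as free-form text.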

10:00 – 10:30

Coffee Break

10:30 – 11:45

Session 3: Agentic AI - Building Real Workflows

Common agentic AI workflow patterns and the architectural challenges behind them. Covers how Object Spatial Programming (OSP) and by llm() work together to tackle these challenges, with hands-on exercises to put the concepts into practice.
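As a hedged sketch of the OSP style (the node and walker names here are illustrative, not taken from the tutorial material), agents are walkers whose abilities fire as they traverse a typed graph:

```jac
node Task {
    has description: str;
    has done: bool = False;
}

walker Runner {
    can start with `root entry {
        # from the root node, move along outgoing edges
        visit [-->];
    }
    can process with Task entry {
        # 'here' is the node the walker currently occupies
        here.done = True;
        visit [-->];
    }
}

with entry {
    # build a small chain of tasks off the root and spawn the walker
    t1 = Task(description="gather requirements");
    t2 = Task(description="draft plan");
    root ++> t1;
    t1 ++> t2;
    root spawn Runner();
}
```

The traversal logic lives in the walker while state lives in the nodes, which is what lets the runtime distribute the same program across machines without code changes.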

11:45 – 12:30

Session 4: Research Frontiers

Kernel Forge: Automatically synthesizes and optimizes GPU kernels for PyTorch models, removing the need for any CUDA or kernel programming expertise.

GraphMend: Applies source-level code transformations to eliminate FX graph breaks in torch.compile(), preventing silent fallbacks to eager mode and ensuring models run at full compiled performance.

Organizers

Jayanaka Dantanarayana

University of Michigan

Savini Kashmira

University of Michigan

Interested in attending? Let us know so we can plan accordingly and keep you updated.