Jaseci: An Open-Source AI-Native Language and Runtime Stack
From Agentic AI Development to Systems Research
Mar 22 (Sunday), Morning Session · ASPLOS'26 @ Pittsburgh
Venue: Warhol Room, The Landing Hotel
Agentic AI development is broken. Too many tools, too many languages, too much glue code. Jaseci fixes this: an open-source language and runtime extending Python with native AI and scaling constructs. Prompt less, smile more. No boilerplate, no DevOps, no manual prompts.
Tutorial Themes
Declare a function by llm() and the compiler writes the prompts. Build multi-agent systems with walkers on graphs. Zero boilerplate.
Jaseci Ecosystem in Action
Jaseci Resources
Jaseci.org
Official website with documentation, guides, tutorials, and community for the Jaseci ecosystem.
Jaseci on GitHub
The core open-source AI-native language and runtime stack. Star the repo and contribute.
ASPLOS 2026
Official ASPLOS 2026 workshops and tutorials page at Pittsburgh, March 2026.
Research Papers
- MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming
- GraphMend: Code Transformations for Fixing Graph Breaks in PyTorch 2
- Prompt Less, Smile More: MTP with Semantic Engineering in Lieu of Prompt Engineering
- The Jaseci Programming Paradigm and Runtime Stack: Building Scale-Out Production Applications Easy and Fast
- Scaling Down to Scale Up: A Cost-Benefit Analysis of Replacing OpenAI's LLM with Open Source SLMs in Production
Agenda (Mar 22, 2026, ASPLOS'26)
Session 1: Introduction to Jaseci
An overview of the Jac language, the Jaseci runtime, and the open-source ecosystem, and how they address the fundamental gaps in modern AI application development.
Session 2: Agentic AI - Beyond Prompt Engineering
How Jaseci's by llm() construct uses semantic type annotations to auto-generate prompts, replacing fragile hand-crafted prompting with structured, type-safe reasoning.
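As a flavor of what the session covers, here is a minimal Jac sketch of by llm(): the function body is omitted, and the compiler derives the prompt from the signature and types. The byllm import path and model name are illustrative assumptions, not a definitive API.

```jac
import from byllm { Model }

# Illustrative model choice; any supported backend could be used.
glob llm = Model(model_name="gpt-4o");

enum Sentiment { POSITIVE, NEGATIVE, NEUTRAL }

# No hand-written prompt: the function name, parameter types, and
# return type are the "meaning" the compiler turns into a prompt.
def classify(review: str) -> Sentiment by llm();

with entry {
    print(classify("The battery died after two days."));
}
```

The return type constrains the LLM's output to a valid Sentiment value, replacing ad hoc output parsing with type-safe results.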
Coffee Break
Session 3: Agentic AI - Building Real Workflows
Common agentic AI workflow patterns and the architectural challenges behind them. Covers how Object Spatial Programming (OSP) and by llm() work together to tackle these challenges, with hands-on exercises to put the concepts into practice.
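A compact sketch of how OSP and by llm() combine: a walker traverses a graph of nodes, invoking an LLM-backed function at each one. The byllm import and model name are assumptions for illustration.

```jac
import from byllm { Model }

glob llm = Model(model_name="gpt-4o");  # illustrative backend

def categorize(text: str) -> str by llm();

node Ticket {
    has text: str;
    has category: str = "";
}

walker Triage {
    can start with `root entry { visit [-->]; }  # enter the graph
    can classify with Ticket entry {
        here.category = categorize(here.text);   # LLM call per node
        visit [-->];                             # continue traversal
    }
}

with entry {
    root ++> Ticket(text="App crashes on login");
    root spawn Triage();
}
```

The walker carries the agent's control flow, while the graph holds the data, so the same traversal logic scales from one node to many without glue code.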
Session 4: Research Frontiers
Kernel Forge: Automatically synthesizes and optimizes GPU kernels for PyTorch models, removing the need for any CUDA or kernel programming expertise.
GraphMend: Applies source-level code transformations to eliminate FX graph breaks in torch.compile(), preventing silent fallbacks to eager mode and ensuring models run at full compiled performance.
Organizers
Prof. Jason Mars
University of Michigan
Prof. Lingjia Tang
University of Michigan
Prof. Krisztian Flautner
University of Michigan
Jayanaka Dantanarayana
University of Michigan
Savini Kashmira
University of Michigan
Interested in attending? Let us know so we can plan accordingly and keep you updated.