Abstract
Libraries and frameworks are where LLMs often break down: too much context, too many moving parts, and lots of hidden assumptions. In this talk, we’ll unpack why models struggle in this space and present our method for structuring library knowledge into digestible chunks. Using Svelte as the running example, you’ll see exactly how models get it wrong, and how Tessl's approach can help them finally get it right.
Overview
Why frameworks and libraries trip up LLMs
Software engineering, among other things, requires precision: in terminology, in API usage patterns, and in dependency versioning. But models are trained on snapshots of data that:
- Were captured at a point in the past, so they contain nothing about a framework or version released after that cutoff;
- Contain mixed and sometimes contradictory information about the same package, for example several major versions with no clear differentiation;
- Cover packages unevenly: some are richly documented, while others have very little coverage.
We’ll unpack why this matters for developers relying on AI coding tools and introduce a practical tool to help.
Tessl's method
- Structured, doc-like blueprints of a package’s API, best practices, and examples.
- They give coding agents a reliable reference point while iterating, helping them stay aligned with how a library is really meant to be used.
Svelte as a running example
- Svelte poses a particular challenge: LLMs often confuse it with other frameworks or fall back to patterns from older Svelte versions (see the sketch after this list).
- With our method, the same agent can navigate the framework more effectively, producing cleaner, more accurate code.
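
To make the failure mode concrete, here is a minimal counter sketch of our own (not material from the talk): Svelte 5 introduced explicit "runes" for reactivity, while models trained largely on pre-Svelte-5 data tend to emit the older Svelte 4 syntax shown in the comment.

```svelte
<!-- Svelte 5 ("runes") style: the syntax current models often miss -->
<script>
  let count = $state(0);             // explicit reactive state via the $state rune
  let doubled = $derived(count * 2); // derived value via $derived (replaces `$:` labels)
</script>

<button onclick={() => count += 1}>
  clicked {count} times, doubled is {doubled}
</button>

<!--
  Typical LLM fallback: the Svelte 4 pattern, which is deprecated in Svelte 5
  and rejected outright when mixed with runes in the same component.

  <script>
    let count = 0;                   // implicit reactivity on assignment
    $: doubled = count * 2;          // `$:` reactive statement
  </script>
  <button on:click={() => count += 1}> ... </button>
-->
```

A reference that pins the target Svelte version and its current API gives the agent an unambiguous signal for which of these two forms to emit.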