You sparked an idea that I started working on today. It's more like `make` than `terraform` but will share principles from both.
---
The idea is that convo files will serve as dependencies of target outputs, and those outputs could be anything from React components to generated images or videos.
---
This should make for a declarative way of defining generated applications and content in a way that is repeatable and easy to modify.
---
I'll implement the same caching strategy that `make` uses to minimize the number of tokens consumed as changes to convo files are made.
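The make-style caching could be sketched with content hashes in place of file timestamps. This is just a minimal illustration of the idea, not Convo-Lang's actual implementation; `buildIfStale` and `BuildCache` are hypothetical names:

```typescript
import { createHash } from "crypto";

// Cache mapping a target name to the hash of the convo source that produced it.
type BuildCache = Map<string, string>;

const hashSource = (src: string): string =>
  createHash("sha256").update(src).digest("hex");

// Rebuild a target only when its convo source has changed since the last
// build, mirroring how `make` skips up-to-date targets. Returns true when a
// build actually ran (i.e. tokens would be spent on an LLM call).
function buildIfStale(
  cache: BuildCache,
  target: string,
  convoSource: string,
  build: (src: string) => void,
): boolean {
  const hash = hashSource(convoSource);
  if (cache.get(target) === hash) {
    return false; // target is up to date, no tokens spent
  }
  build(convoSource); // hypothetical: this is where the LLM would be called
  cache.set(target, hash);
  return true;
}
```

Editing a convo file changes its hash, so only the targets that depend on it get regenerated on the next run.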
Hi everybody, I'm Scott, the creator of Convo-Lang. I created Convo-Lang to solve a lot of my personal needs while building AI applications.
Convo-Lang originally started off as a prompt templating and conversation state management system. It gave me a way to load a prompt template into a chat interface and reuse the same code to handle sending messages between the user and an LLM. This was in the early days of OpenAI when DaVinci was the top model.
As Convo-Lang grew in complexity I created a VSCode extension for syntax highlighting to make templates easier to read and write. And as new patterns like RAG, JSON mode and tool calling hit the scene I added support for them. Before long I had a pretty decent framework that was easy to integrate into TypeScript applications and solved most of my AI needs.
As I built more applications that used tool calling I realized that I was writing less TypeScript, and a good amount of the TypeScript I was writing was basic callback functions called by tools the LLM decided to invoke. At that point I realized that if I created a simple scripting language that could do basic things like make HTTP requests, I could build the majority of my agents purely in Convo-Lang and encapsulate all of their logic in a single file.
I found the idea of an agent encapsulated in a single simple text file very appealing, and then I did as I do. I ignored all of my other responsibilities as a developer for the next few days and built a thing \(ᵔᵕᵔ)/
After those few sleepless nights I had a full-fledged programming language and a runtime and CLI that could execute it. It's been about a year and a half since then and I've continued to improve and refine the language.
I very much agree with you. I wanted a minimal scripting language that worked with LLMs and had as few abstractions as possible.
I'm actually working on a system that uses Convo-Lang scripts as a form of "sub-agents" that are controlled by a master Convo-Lang script.
And regarding your "maybe renderable" comment, Convo-Lang scripts are parsed and stored in memory as a set of message objects similar to a DOM tree. The ConversationView in the @convo-lang/convo-lang-react NPM package uses the message objects to render a conversation as a chat view and can be extended to render custom components based on tags / metadata that is attached to the messages of the conversation.
It's both a library and a language. You can use it directly in TypeScript and Javascript using the `convo` tagged template literal function from the @convo-lang/convo-lang NPM package.
Here is an example of using it in TypeScript:
``` ts
import {convo} from "@convo-lang/convo-lang"
const categorizeMessage=convo`
> define
UserMessage = struct(
sentiment: enum("happy" "sad" "mad" "neutral")
type: enum("support-request" "complaint" "compliment" "other")
# An array of possible solutions for a support-request or complaint
suggestedSolutions?: array(string)
# The users message verbatim
userMessage: string
)
@json UserMessage
> user
Categorize the following user message:
<user-message>
${userMessage}
</user-message>
`
console.log(categorizeMessage)
```
And for a userMessage that looks something like:
----
My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!
----
The returned JSON object would look like:
``` json
{
"sentiment": "mad",
"type": "complaint",
"suggestedSolutions": [
"Offer a full refund to the original payment method",
"Provide a free replacement unit under warranty",
"Issue a prepaid return shipping label to retrieve the broken item",
"Offer store credit if a refund is not preferred",
"Escalate to warranty/support team for expedited resolution"
],
"userMessage": "My Jackhawk 9000 broke in half when I was trying to cut the top of my 67 Hemi. This thing is a piece of crap. I want my money back!!!"
Sometimes I feel like an LLM. It takes a little getting used to, but that is the same for any new language. And the Convo-Lang syntax highlighter helps too.
The triple question marks (???) are used to enclose natural language that is evaluated by the LLM and is considered an inline-prompt since it is evaluated inline within a function / tool call. I wanted there to be a very clear delineation between the deterministic code that is executed by the Convo-Lang interpreter and the natural language that is evaluated by the LLM. I also wanted there to be as little need for escape characters as possible.
The content in the parentheses following the triple question marks is the header of the inline-prompt and consists of modifiers that control the context and response format of the LLM.
Here is a breakdown of the header of the first inline-prompt: (+ boolean /m last:3 task:Inspecting message)
----
- modifier: +
- name: Continue conversation
- description: Includes all previous messages of the current conversation as context
----
- modifier: /m
- name: Moderator Tag
- description: Wraps the content of the prompt in a <moderator> xml tag and injects instructions into the system prompt describing how to handle moderator tags
----
- modifier: last:{number}
- name: Select Last
- description: Discards all but the last {number} messages from the current conversation when used with the (+) modifier
----
- modifier: task:{string}
- name: Task Description
- description: Used by UI components to display a message to the user describing what the LLM is doing.
It's an interpreted language. The interpreter and parser are written in TypeScript, so it does use Javascript at runtime, but it's not a transpiled language that is just converted to Javascript.
The Convo-Lang CLI allows you to run .convo files directly on the command line, or you can embed the language directly into TypeScript or Javascript applications using the @convo-lang/convo-lang NPM package. You can also use the Convo-Lang VSCode and Cursor extensions to execute prompts directly in your editor.
The Convo-Lang runtime also provides state management for ongoing conversations and handles the transport of messages to and from LLM providers. And the @convo-lang/convo-lang-react NPM package provides a set of UI components for building chat interfaces and rendering generated images.
---
Anybody have any thoughts or suggestions?