
A declarative DSL is a really interesting approach, especially since you're exposing it directly to users. There are some applications where rolling the dice in production by having an LLM as part of the runtime is not an option.


Yes! Clearly, introducing LLMs into the mix raises the problem of rolling the dice. The point of view we chose is: how do we orchestrate collaboration between AI, software, and people? Our aim of repeatable workflows drove us away from building autonomous agents and toward a design where the software is in command of the orchestration. Humans and AI can then discuss "what you want to do", and the software runs it, using AI where it's needed.



Bardeen AI | ML Researcher/Engineer | SF onsite | Full-time

https://bardeen.ai We're hiring Machine Learning researchers and engineers.

JD: https://docs.google.com/document/d/1umypCfLxmb77olw4UAqz7co_...

Email: ml@bardeen.ai


Can someone please explain why this works? Did they agree on what everyone would be playing? I am not a musician, but it seems like the tempos and time signatures don't match. Or is the whole point that they didn't agree on anything upfront and it still somehow works? Are they all actually playing in B-flat?


To answer your question, let's consider the ways in which the videos do _not_ work together:

- They're not playing at the same tempo

- They're not playing the same note lengths

- They're not necessarily playing notes in the same chords

It's somewhat analogous to sitting down at a piano, holding down the sustain pedal (so that all the notes played ring out for a long time), and just playing notes in the same key. (Say, all the white keys.)

It "works" because the notes are in the same key. But note that we're not making music with any kind of groove or consistent beat. It's more like an ethereal, improvised holding out of a single chord.

TL;DR – "Or is the whole point that they didn't agree to anything upfront and it still somehow works?" Yes :)


Even shorter TL;DR: It's YouTube wind chimes.


Just use the black keys.


Pentatonic scale :)


We launched LHC@home 2.0 at CERN around the time the LHC itself was launching: https://lhcathome.cern.ch/lhcathome/. The project is still going, and thousands of people from all over the world are contributing.



Come work at Bardeen! Email me at artem@ if you're interested.

On a more serious note, we have built a DSL and an engine for executing automations inside the browser. Where possible, we connect natively to the apps we're integrating with (like Calendar or Notion); where that's not possible (like LinkedIn), we use browser capabilities to interact or pull data. We use a GPT model to transform the description of an automation into our DSL; we then verify it, typecheck it, fill in the gaps where possible, and present it to the user. If the user likes it, they can save it and start using it. Happy to share more details if it's interesting.
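For a rough mental model, here is a minimal TypeScript sketch of that generate-verify-present flow. Every name in it (generateDsl, parseDsl, typecheck, fillDefaults) is a hypothetical stand-in, not our actual API:

```typescript
// Hypothetical sketch of the English -> DSL pipeline described above;
// none of these helpers are Bardeen's real API.

interface Step { action: string; args: Record<string, unknown> }
interface Playbook { steps: Step[] }

// Assumed helpers: an LLM call plus a DSL frontend (parser + typechecker).
declare function generateDsl(description: string): Promise<string>;
declare function parseDsl(dslText: string): Playbook;      // throws on syntax errors
declare function typecheck(playbook: Playbook): string[];  // returns error messages
declare function fillDefaults(playbook: Playbook): Playbook;

async function englishToPlaybook(description: string): Promise<Playbook> {
  // 1. Ask the model to emit the DSL for the user's description.
  const dslText = await generateDsl(description);

  // 2. Verify the output before it can ever run: parse, then typecheck.
  const playbook = parseDsl(dslText);
  const errors = typecheck(playbook);
  if (errors.length > 0) {
    throw new Error(`Generated DSL failed verification: ${errors.join("; ")}`);
  }

  // 3. Fill in the gaps where possible, then present to the user for review.
  return fillDefaults(playbook);
}
```

The key design choice is that the LLM's output is data, not code with authority: nothing executes until it has passed verification and the user has saved it.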


It's similar to JavaScript, but not identical. Here is an example that sends a summary of a calendar event two minutes before it starts (it uses Google Calendar, OpenAI, and Slack):

```
function (recipient) {
  when: __0 = GoogleCalendar.when_next_event_is_in(time: B.Duration(120000));
  __1 = GoogleCalendar.get_next_event();
  __2 = OpenAI.get_summary_of(text: __1.description);
  __3 = BardeenCommons.get_string_concatenating_strings(strings: ["Your next event is:", __1.summary, "at", __1.startTime, "Here is a summary of the event:", __2]);
  Slack.send_message(message: __3, recipient: recipient);
}
```

Initial accuracy was about 10%, which was pretty meh TBH. With a lot of tweaking and tuning we were able to get it to 70%. This means that it takes about 2-3 attempts to get it to generate what's expected.

The great thing is that we only use AI to generate the DSL description of the automation and let the user tweak and tune it. Once it's there we just execute it with our engine.


Nice, pretty impressive to go from 10% to 70%. Do you also do AutoGPT-style looping with re-asking and verification until a proper DSL is created?


Not yet, honestly, because we were able to get to the accuracy we needed without it. Creating a feedback loop and turning this into an agent-style interaction can be helpful for more complex automations, but you can get a lot of mileage out of what we have. Having said that, agents look really fascinating and some of the demos I've seen are mind-blowing, so we will definitely look into it very closely.
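To make the idea concrete, here's a rough TypeScript sketch of what such a re-ask loop could look like; the helpers are hypothetical and this is not something we've built:

```typescript
// Hypothetical sketch of an AutoGPT-style generate/verify/re-ask loop
// (illustrative only; not part of Bardeen today).

declare function generateDsl(prompt: string): Promise<string>;
declare function validateDsl(dslText: string): string[]; // empty array = valid

async function generateWithRetries(description: string, maxAttempts = 3): Promise<string> {
  let prompt = description;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const dslText = await generateDsl(prompt);
    const errors = validateDsl(dslText);
    if (errors.length === 0) return dslText; // verified: hand off to the engine

    // Feed the verification errors back so the next attempt can correct them.
    prompt = `${description}\nYour previous attempt failed with: ${errors.join("; ")}\nPlease fix it.`;
  }
  throw new Error(`No valid DSL after ${maxAttempts} attempts`);
}
```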


It's Artem, Co-Founder of Bardeen here. We launched Magic Box today and I wanted to share it with you.

It takes automation descriptions in English and turns them into in-browser playbooks that can either be launched via shortcut or triggered by an external event (such as an email arriving).

I think it's pretty cool and it's ready for anyone to try it out (no waitlist!). We currently have about 30 productivity apps integrated. Please give it a shot and let us know what you think.

Here is a Twitter thread if you're not in the mood for YouTube: https://twitter.com/bardeenai/status/1648355825537421314

Here is the Bardeen installation link: https://chrome.google.com/webstore/detail/bardeen-automate-m...


What was the most interesting thing you learned while implementing the WAL? Have you thought about how the WAL will work in a multi-master setup?


We write to the WAL and then register the transaction in the transaction sequence registry. If a concurrent transaction was registered between the start and end of ours, we update our uncommitted transaction data with the concurrent transactions and retry registering in the sequencer. To scale to multi-master, we will move the transaction sequence registry into a service backed by a consensus algorithm.
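In TypeScript-flavored pseudocode, the commit path might look roughly like the sketch below; every registry call here is an assumed interface for illustration, not the actual implementation:

```typescript
// Illustrative sketch of the WAL-then-register commit loop described above.
// All of these functions are assumed interfaces, not the project's real API.

declare function walAppend(txnData: Uint8Array): Promise<void>;
declare function headSeq(): Promise<number>;
// Atomically registers the txn if `expectedSeq` is still the registry head;
// returns the new sequence number on success, or null on a conflict.
declare function tryRegister(txnId: string, expectedSeq: number): Promise<number | null>;
declare function concurrentTxnsSince(seq: number): Promise<string[]>;
declare function mergeConcurrent(txnId: string, concurrent: string[]): Promise<void>;

async function commit(txnId: string, txnData: Uint8Array): Promise<number> {
  await walAppend(txnData); // durability first: the WAL write precedes registration

  let base = await headSeq(); // the sequence we believe we're committing after
  for (;;) {
    const seq = await tryRegister(txnId, base);
    if (seq !== null) return seq; // no concurrent registration: committed

    // Someone registered between our start and end: fold their effects into
    // our uncommitted data, then retry against the new head.
    const concurrent = await concurrentTxnsSince(base);
    await mergeConcurrent(txnId, concurrent);
    base = await headSeq();
  }
}
```

Moving this registry behind a consensus-backed service (e.g. a Raft-style one) is what would let the same compare-and-swap step work across multiple masters.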

