Show HN: Open-source SDK for creating custom code interpreters with any LLM

mlejva | 64 points

I have something similar [0] but with a different philosophy: basically a Docker container that you can execute code against, with the ability to set timeouts, auto-install and uninstall dependencies, and a bunch of other cool stuff.

The pain point in all of this is dependencies, and making sure someone doesn't use your infrastructure to DDoS other folks.
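The timeout part of this can be sketched with nothing but the standard library. Note this is a simplified, hypothetical stand-in: AgentRun itself runs the code inside a Docker container and manages dependencies, which a bare subprocess does not give you.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run a snippet in a separate interpreter process, killing it on timeout.

    A real setup (like AgentRun's) would execute this inside a Docker
    container; this stdlib-only sketch only shows the timeout-bounded
    execution idea.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout_s,  # kill the child process if it runs too long
        )
        return result.stdout if result.returncode == 0 else result.stderr
    except subprocess.TimeoutExpired:
        return "error: execution timed out"

print(run_untrusted("print(1 + 1)"))           # → 2
print(run_untrusted("while True: pass", 1.0))  # → error: execution timed out
```

Running the child in its own process is what makes the timeout enforceable; an in-process `exec()` could not be interrupted this cleanly.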

0. https://github.com/Jonathan-Adly/AgentRun

jonathan-adly | 13 days ago

i directed the intrepid hackers over here for discussion, but if you'd like some examples of cool stuff people have been doing with it already, have a look at https://interpreter-weekend.devpost.com/ :) (bonus: the catch-up material with a bunch of little snippets to build on that dropped mid-hack; https://noteshare.space/note/clv0f272x1100201mw12skfhf4#xhDF... :3)

yikes_awjeez | 13 days ago

Is there an open-source solution that I can self-host without paying anything?

E2B is doing great stuff, but it is way too expensive and we users do not like that. What is a solution for that?

fdcaps | 13 days ago

Great, happy to see progress in this space! I built a demo of the same thing, with the same use case, a couple of days ago: executing untrusted JS code.

Empowering user apps with code execution is the way to go.

sandruso | 13 days ago

Hey everyone! I'm the CEO of the company that built this SDK.

We're a company called E2B [0]. We're building open-source [1] secure environments for running untrusted AI-generated code and AI agents. We call these environments sandboxes, and they are built on top of a microVM technology called Firecracker [2]. We specifically decided to use Firecracker instead of containers because of its security and its ability to do snapshots.

You can think of us as giving small cloud computers to LLMs.

We recently created a dedicated SDK for building custom code interpreters in Python or JS/TS. We saw this need after a lot of our users had been adding code execution capabilities to their AI apps with our core SDK [3]. These use cases were often centered around AI data analysis, so code interpreter-like behavior made sense.

The way our code interpreter SDK works is by spawning an E2B sandbox with a Jupyter server. We then communicate with this Jupyter server through the Jupyter kernel messaging protocol [4]. Here's how we added a code interpreter to the new Llama 3 models [5].
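For a feel of what that communication involves, here is a stdlib-only sketch of the `execute_request` message a client sends to a Jupyter kernel. The field layout follows the kernel messaging spec [4]; the ZeroMQ transport and HMAC signing are omitted, and the `username` value is an arbitrary placeholder.

```python
import uuid
from datetime import datetime, timezone

def execute_request(code: str, session: str) -> dict:
    """Build a Jupyter `execute_request` message body (protocol v5.x)."""
    return {
        "header": {
            "msg_id": uuid.uuid4().hex,
            "session": session,
            "username": "client",  # arbitrary client identifier
            "date": datetime.now(timezone.utc).isoformat(),
            "msg_type": "execute_request",
            "version": "5.3",
        },
        "parent_header": {},  # empty: this message starts a new exchange
        "metadata": {},
        "content": {
            "code": code,  # the code the LLM wants executed
            "silent": False,
            "store_history": True,
            "user_expressions": {},
            "allow_stdin": False,
            "stop_on_error": True,
        },
    }

msg = execute_request("import pandas as pd", session=uuid.uuid4().hex)
print(msg["header"]["msg_type"])  # → execute_request
```

The kernel replies with `execute_reply` plus a stream of IOPub messages (stdout, rich display data, errors), which is what lets a code-interpreter layer return charts and tracebacks, not just text.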

We don't do any wrapping around the LLM, any prompting, or any agent-like framework. We leave all of that to our users. We're really just a boring code execution layer that sits at the bottom. We're building for a future where software builds other software.

Our long-term plan is to build an automated AWS for AI apps and agents, where AI can build and deploy its own software while giving developers powerful observability into what's happening inside our sandboxes. With everything being open source.

Happy to answer any questions and hear feedback!

[0] https://e2b.dev/

[1] https://github.com/e2b-dev

[2] https://github.com/firecracker-microvm/firecracker

[3] https://e2b.dev/docs

[4] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...

[5] https://github.com/e2b-dev/e2b-cookbook/blob/main/examples/l...

mlejva | 13 days ago

Awesome!

jamesmurdza | 13 days ago

Very cool!

jurajmasar | 13 days ago