Literally Everything

Overview
 

Using OpenAI's API, I've made it possible to literally import anything "from everything"! Any function you can imagine, dynamically generated at runtime and accessible with a simple import. When you import <anything> from everything, my project uses Python's AST to scan your source and find all usages of <anything>. It then merges a few lines of context on both sides of every function call, along with the call itself, and uses OpenAI's gpt-4o model to generate a Python function that you can then use in your code.

I've Done Everything

Last Tuesday, on July 2nd, 2024, I implemented everything in Python. There is literally nothing left to implement; I've now done it all.

Here are the GitHub repo and the PyPI package.

And here's a real world demo...

```python
from everything import sort_list, stylized_greeting

# Print a greeting for Wolf
print(stylized_greeting("Wolf", "Angry"))

# Sort a list
print(sort_list([3, 2, 1, 0, -5, 2.5]))
```
```shell
>> pip install dothething
>> OPENAI_API_TOKEN=...
>> python example.py
WHAT DO YOU WANT, WOLF?!
[-5, 0, 1, 2, 2.5, 3]
```

It is, in a word, everything.

Inspiration

The idea hit me out of the blue, quite literally, during a conversation with a friend about abstractions for parallel computing. Why not use your own abstractions, created on a whim at runtime?

The idea!

How's it work?

When you import <anything> from everything, dothething uses Python's ast library to scan your source code and find all usages of <anything>. I've defined methods that search for all usages of given functions and can then grab parts of the code. Python's ast library is really nice because it provides line numbers referencing the exact location of each element in the source. It then merges a few lines of context on both sides of every function call, along with the call itself. Finally, it uses OpenAI's gpt-4o model to generate a Python function, which you can then use in your code.
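As a rough sketch of that scanning step (the function name here is hypothetical, not the project's actual helper), you can walk the AST looking for calls to a given name and use each node's line numbers to slice out surrounding context:

```python
import ast


def find_call_context(source: str, name: str, pad: int = 2) -> list[str]:
    """Return each call to `name` plus `pad` lines of context around it."""
    tree = ast.parse(source)
    lines = source.splitlines()
    snippets = []
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == name
        ):
            # ast line numbers are 1-based; end_lineno is inclusive
            start = max(node.lineno - 1 - pad, 0)
            end = min(node.end_lineno + pad, len(lines))
            snippets.append("\n".join(lines[start:end]))
    return snippets


source = "nums = [3, 1, 2]\nprint(sort_list(nums))\n"
print(find_call_context(source, "sort_list"))
```

Each snippet is the raw source of the call site plus its neighbors, which is roughly what gets sent along in the prompt.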

Everything CLI

It also works in a REPL, scanning the history of the interactive shell for any relevant context. This part is a bit glitchy, and is probably shell- and terminal-dependent to some extent. It uses Python's readline library to get access to the shell history.
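A minimal sketch of pulling recent REPL history via readline (assuming GNU readline is available; the project's actual logic is more involved):

```python
import readline


def recent_history(n: int = 5) -> list[str]:
    """Return up to the last n lines of the interactive shell's history."""
    total = readline.get_current_history_length()
    start = max(total - n + 1, 1)  # history items are 1-indexed
    return [readline.get_history_item(i) for i in range(start, total + 1)]
```

Those recovered lines can then serve as the context around a call, the same way neighboring source lines do in a script.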

Execs

To actually bring the LLM code into the codebase, I'm using exec and a context, where I dump the function into a localized environment and then wrap the function, and call the resulting function with the wrapper. Here's where I actually inject the LLM code...

```python
function_name = re.findall(r"def (\w[\w\d]+)\(", generated_function.split("\n")[0])
# findall returns a list, so check for emptiness rather than None
function_name = function_name[0] if function_name else "error"
_LOGGER.debug(f"Function generated had name {function_name}")

def the_thing(*args, **kwargs):
    exec(generated_function)
    context = locals()
    exec(f"result = {function_name}(*args, **kwargs)", context)
    return context["result"]
```
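To make the pattern concrete, here's a self-contained toy version, with a hand-written stand-in for what the LLM would actually return:

```python
import re

# Hypothetical stand-in for the model's output
generated_function = "def add(a, b):\n    return a + b"

# Pull the function's name out of its def line
match = re.findall(r"def (\w[\w\d]+)\(", generated_function.split("\n")[0])
function_name = match[0] if match else "error"


def the_thing(*args, **kwargs):
    # Execute the generated source in a fresh namespace, then call it there
    namespace = {"args": args, "kwargs": kwargs}
    exec(generated_function, namespace)
    exec(f"result = {function_name}(*args, **kwargs)", namespace)
    return namespace["result"]


print(the_thing(2, 3))  # prints 5
```

Using an explicit dict as the exec namespace keeps the generated code's names from leaking into the wrapper's scope.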

Dynamic Imports

To handle importing anything, under the hood I've overridden the module's __getattr__ so that it can create the function when it is imported. My __init__.py for the module looks something like this...

```python
# __init__.py
"""Top-level package for everything."""

__author__ = """Wolf Mermelstein"""
__email__ = "wolfmermelstein@gmail.com"
__version__ = "0.1.0"


def __getattr__(name: str):
    from .makethething import make_the_thing

    return make_the_thing(name)
```

An __init__.py file is a directive to Python that a given folder is a package, and it runs when you import that package.

Because of how module caching works, if you import the same name in different places in your project, it will only generate the function once and then will reuse it multiple times.
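That reuse falls out of Python's import system: a module is executed once and stored in sys.modules, and later imports just return the cached object. A quick check with a stdlib module:

```python
import sys
import json

again = __import__("json")          # second import: served from the cache
print(again is json)                # True
print(sys.modules["json"] is json)  # True
```

The same cache holds the everything module itself, so state it keeps (like already-generated functions) survives across imports.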

pep 562

But, as it turns out, this is a relatively new feature, added by PEP 562 (PEP = Python Enhancement Proposal). Previously, you had to rely on a hack that the creator of Python himself suggested: replacing the module in sys.modules with an instance of a class that overrides __getattr__.

```python
"""Top-level package for everything."""

__author__ = """Wolf Mermelstein"""
__email__ = "wolfmermelstein@gmail.com"
__version__ = "0.1.0"


from typing import Callable
from everything.generator import runtime_generate_function
import sys


class Everything:
    def __getattr__(self, name: str) -> Callable:
        return runtime_generate_function(name)


sys.modules[__name__] = Everything()
```

Next Steps

Now that it's working, I've had the idea to take it to the next level by adding a compilation step. Basically, I want to be able to import everything throughout my codebase, even submodules like everything.finance.irr, and have it build a library for me based on the way I use it.

The idea is that you can use a library that doesn't exist however you want, and then everything can "poof" it into existence by running something like everything build. It would generate an everything folder with all the content in it, so you could modify the code if necessary (though that's against the spirit of the library).

Improvements

There are a lot of open questions about what I'd need to do to get this working, and how I can make it better. For one, I want to figure out how to give OpenAI-generated functions access to arbitrary Python libraries on PyPI. I also want to lint and verify the code, and run Black on it for formatting.

Naming

So, I was thinking, wouldn't it be awesome if you could literally pip install everything? Currently it's dothething on PyPI, which is kinda lame.

I looked into it, and it appears that everything was never claimed on PyPI! However, it's reserved, probably to prevent name hoarding or abuse. I did a lot of research and found PEP 541, an approved Python Enhancement Proposal for reclaiming unused or unmaintained namespaces on PyPI. I made a request as a GitHub issue, which is the method they suggest, and am currently waiting to hear back.

The guy!
Reaching out

A bit more research scanning similar requests, though, suggests that PyPI doesn't really handle PEP 541 requests anymore. The requests pile up, but only the admins can actually resolve them. This led me on a search for an admin I might be able to contact directly to get the name everything. After a bit of searching, I found Ee, the head of infrastructure for the Python Software Foundation, and learned that he was a Recurse Center alum too. I reached out to see if he'd be able to help, and am currently waiting for a response.