The llm command-line tool combined with uv's inline dependency management enables rapid Python script creation without manual environment setup.
Setting Up the Template
First, install a script generation template for llm:
llm templates edit scripter
Add this template:
model: gpt-4.1
system: |
  You write Python tools as single-file scripts.
  They always start with this comment:
  # /// script
  # requires-python = ">=3.12"
  # ///
  These files can include dependencies on Python packages from PyPI.
  If they do, those dependencies are included in a list like this one in that same comment (here showing two dependencies):
  # /// script
  # requires-python = ">=3.12"
  # dependencies = [
  #     "click",
  #     "sqlite-utils",
  # ]
  # ///
  Don't add any text before or after the script.
  Don't quote the script in ``` or anything similar.
  Just output the Python code so that it can be saved directly into a .py file.
Generating and Running Scripts
Generate a script by running llm with the -t option to select the template, piping the output into a .py file:
llm -t scripter "script that counts from 1 to 123 and indicates all prime numbers with a '*'" > primes.py
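The exact output varies by model, but a script that follows the template's conventions would look roughly like this (a hypothetical result, not a captured model response):
# /// script
# requires-python = ">=3.12"
# ///


def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True


def main() -> None:
    # Count from 1 to 123, marking primes with a '*'
    for n in range(1, 124):
        marker = " *" if is_prime(n) else ""
        print(f"{n}{marker}")


if __name__ == "__main__":
    main()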
After reviewing the generated code, run it with:
uv run primes.py
Based on the script's inline header comments, uv handles dependency installation automatically; no virtual environment setup is required.
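The primes example has no dependencies, so it is worth seeing the header in action. The sketch below is an illustrative script (name and contents are assumptions, not model output) that declares click in its header; uv run resolves and installs it into an ephemeral environment on first execution:
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "click",
# ]
# ///
import click


@click.command()
@click.argument("name")
def main(name: str) -> None:
    """Greet NAME, demonstrating a PyPI dependency resolved by uv."""
    click.echo(f"Hello, {name}!")


if __name__ == "__main__":
    main()
Saving this as, say, hello.py and running uv run hello.py World downloads click if needed and prints the greeting, with no venv to create or activate.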
Security Considerations
Always review generated scripts before execution. LLM-generated code may contain:
- Unintended behaviors or logic errors
- Security vulnerabilities
- Resource-intensive operations
- File system modifications
This workflow excels for quick prototypes and one-off utilities, but treat all generated code as untrusted until manually verified.