7 Comments
Chris Samp

For the last few days I’ve had a long-running local AI project going - identifying and extracting details from about 60k family photos, screenshots, and other accumulated detritus. Claude wrote the code for me in detailed sessions. I am an experienced dev and have a lot of opinions.

While the process runs in the background I’ve been interrogating Claude about metrics and optimizations, trying different iterative improvements.

But maybe my intent is closer to your description.

Instead of “Claude, parallelize loading images to better utilize the GPU,” I could mean “file loader, grab several images at once to give to the GPU.”

I’ll think about that this afternoon. Maybe I can name the different components and see if I can address them as entities instead of lifeless blocks.
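
Something like this is what I’m picturing for that loader - just a rough sketch, with placeholder paths, batch sizes, and worker counts, not my actual pipeline:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from PIL import Image


class FileLoader:
    # A named component I can address directly: "file loader, grab several
    # images at once to give to the GPU." It decodes batches on worker threads
    # so the GPU isn't waiting on one file at a time.
    def __init__(self, paths, batch_size=8, workers=4):
        self.paths = list(paths)
        self.batch_size = batch_size
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def _decode(self, path):
        # Runs on a worker thread; PIL spends most of its time in I/O and decoding.
        return Image.open(path).convert("RGB")

    def batches(self):
        # Yield one decoded batch at a time so there is always work ready for the GPU.
        for i in range(0, len(self.paths), self.batch_size):
            chunk = self.paths[i : i + self.batch_size]
            yield list(self.pool.map(self._decode, chunk))


loader = FileLoader(Path("photos").glob("*.jpg"))
for batch in loader.batches():
    ...  # hand the decoded batch to the GPU model here
```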

Rainbow Roxy

Thanks for this, it really clarifies a lot. But how will we manage debugging and versioning in such a fluid, object-centric environment?

Pierre Gallet

Fascinating concept. To a non-coder like myself, help me understand: is an LLM embedded in each object? If so, I imagine fine-tuned SLMs trained on each object’s specifics being available for use. But quite possibly I’ve completely misunderstood the gist of this idea?

Eleanor Berger

Yes, each object is an agent and has its own LLM thread. I haven't gotten as far as experimenting with a fine-tuned SLM. Also, size is relative - how small does it have to be before it stops being sufficiently intelligent? Definitely worth investigating, though. The current implementation with GPT-5-mini is quite slow.
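
Conceptually it looks roughly like this - a bare-bones sketch rather than the real implementation, and the chat-API wiring and names here are just one way to do it:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-style chat API; swap in whatever backend you use


class ObjectAgent:
    # Each object keeps its own conversation thread, so its "memory" is simply
    # the message history it has accumulated since it was created.
    def __init__(self, name, role):
        self.name = name
        self.thread = [{
            "role": "system",
            "content": f"You are {name}, {role}. Stay in character as this object.",
        }]

    def tell(self, message):
        # Address the object directly; its reply is appended to its own thread.
        self.thread.append({"role": "user", "content": message})
        reply = client.chat.completions.create(
            model="gpt-5-mini",  # the model mentioned above; any chat model works
            messages=self.thread,
        ).choices[0].message.content
        self.thread.append({"role": "assistant", "content": reply})
        return reply


loader = ObjectAgent("file loader", "the component that feeds images to the GPU")
print(loader.tell("Grab several images at once so the GPU never sits idle."))
```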

Pierre Gallet

Things are only getting faster, so what’s slow today is a non-issue tomorrow, I’d say. But yes, optimising with fine-tuned models is likely a best practice anyway. Is there anywhere I can test this environment you’ve built out? Curious to see it in action.