malada ([personal profile] malada) wrote, 2025-02-11 07:40 am

LLM (AI) and the sampling problem

There's an old saying in computing: garbage in, garbage out. Most AIs are simply sampling programs with fancy algorithms attached to re-stitch things back together. Right now, the Large Language Models have thousands of years of human experience and culture to mess with. And they can generate far greater quantities of 'product' in minutes than any poet, painter or musician could in several lifetimes.

The problem I see is what happens when they run out of human input and start sampling 'AI' 'product'. It has to start happening soon, as the internet is being flooded with 'AI' content. With deepfake audio, video and images being created, how is the 'AI' supposed to distinguish between human and artificial 'product'? Will there be a feedback loop until the only thing left is 'AI' product being sampled by other 'AI's to create more 'AI' product? Will everything soon degenerate into the 'gray ooze' of 'product'?

Garbage in, garbage out.
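
For anyone curious what that feedback loop looks like in miniature, here is a minimal sketch, assuming a toy Gaussian "model" and Python with NumPy; it is my own illustration, not anything from the post or from a particular system. Each generation "trains" on the previous generation's output only, the way a model trained mostly on machine-made content would:

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: the "human" corpus, with plenty of variety.
    data = rng.normal(loc=0.0, scale=1.0, size=200)

    for generation in range(1, 26):
        mu, sigma = data.mean(), data.std()     # "train" a model on the current corpus
        data = rng.normal(mu, sigma, size=200)  # next corpus = that model's output only
        if generation % 5 == 0:
            print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

    # Each fit only ever sees the previous generation's samples, so estimation
    # noise compounds from one generation to the next and the spread of the
    # data drifts away from the original; over many generations the diversity
    # tends to erode rather than recover.

It's only a cartoon, and real models and real data are vastly messier, but it captures the flavor of the worry: once the training pool is mostly machine output, each generation's errors become the raw material of the next.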
