Is this as unnerving as it sounds?

I was just watching Andrej Karpathy's excellent "Intro to Large Language Models", and in the "how do they work" section he explains that while we know exactly how an LLM is trained (iterative updates to its parameters), we don't understand why particular circuits emerge or why the parameter structures end up the way they do. In other words, there is highly complex emergent learning happening through this optimization of parameter relationships, but we don't know how the LLM does it or why. This is apparently a well-known problem in the AI space.
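To make the distinction concrete: the "known" part really is just a loop of small parameter updates. Here is a toy sketch (my own illustration, not code from the video) of gradient descent fitting a single weight; the procedure is completely transparent, and the interpretability problem is that a real LLM runs the same kind of loop over billions of weights whose learned structure we then can't explain.

```python
import random

random.seed(0)

# Toy model: learn y = 2x with a single weight via gradient descent.
# This loop is the "iterative updates" part of training, which is
# fully understood; *why* large networks trained this way develop
# the internal circuits they do is the open question.
w = 0.0
lr = 0.01
data = [(x, 2.0 * x) for x in range(1, 6)]

for step in range(100):
    x, y = random.choice(data)
    pred = w * x
    grad = 2 * (pred - y) * x   # derivative of squared error w.r.t. w
    w -= lr * grad              # one small parameter update

print(w)  # converges toward 2.0
```

The mechanics scale up unchanged; what doesn't scale is our ability to read meaning out of the resulting parameters.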

To my untrained ear, this sounds like a red flag. It should be fully understood before we go any further.

Here's the video: https://www.youtube.com/watch?v=zjkBMFhNj_g

submitted by /u/reasonablejim2000
