Is agentic AI governance even a computationally bounded process?

Wrt context drift, goal misalignment, etc.

Is it possible that a Turing machine could, in theory, handle all of the known governance issues? Or is it a case where (say) 90% of the issues could be handled by a strict governance process, but the last 10% are basically impossible to predict and govern?

Or, as Rumsfeld said, are there unknown unknowns, the ones we don't know we don't know, which can never be anticipated or predicted?
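One way to make the 90/10 split concrete (a hedged sketch, not a settled answer): if an agent's behavior can be restricted to a finite state machine, then "can it ever reach a forbidden state?" is decidable by exhaustive search, so that slice of governance is computationally bounded. For agents that are arbitrary programs, Rice's theorem says no general checker of non-trivial behavioral properties can exist, which lines up with the "last 10%" intuition. The state names and policy below are invented for illustration.

```python
from collections import deque

def can_reach_violation(transitions, start, forbidden):
    """BFS reachability over a finite state machine:
    True iff some forbidden state is reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state in forbidden:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy agent with a hypothetical forbidden state "leak_data".
fsm = {
    "idle": ["plan"],
    "plan": ["act", "idle"],
    "act": ["idle", "leak_data"],
}
print(can_reach_violation(fsm, "idle", {"leak_data"}))   # True
print(can_reach_violation(fsm, "idle", {"nonexistent"}))  # False
```

The catch, of course, is the modeling step: forcing a real agentic system (with context drift, open-ended tool use, etc.) into a finite, fully enumerated state space may itself be where the unpredictable 10% hides.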

submitted by /u/Im_Talking
