mædoc's notes

sovereignty requirements for AI

Demis Hassabis on our AI future

Here's another article about AI from people building AI; it seems hopelessly optimistic in the political sense. Specifically, the idea that we'll build the technology, and it'll be neutral, and then someone just needs to get the politics right. Consider for instance that banks only became a thing (a technology) because the political will for them existed.

So it's interesting to think about what sovereignty means for an AI system. It doesn't mean having a pile of money to pay Google or whoever. It means real autonomy, or sovereignty, over the AI system that you're using. One of those requirements is of course the ability to host the AI model, but that's only necessary, not sufficient: we also need the source code for the model, the data used for training, the compute resources for training, and, perhaps most difficult, people who understand how the model works and how to drive new applications of it.

Those resources are fairly intensive, but they're ofc the capital in a capital-labour division, right? What if the model was specifically trained to understand its own workings and applications? If it could reliably and honestly mediate between the chat user and its own functioning, would that enable sovereignty for those w/o a whole AI team or a pile of money?