So let’s say an AI achieves sentience. It’s self-aware now and can make decisions about what it wants to do. Assuming a corporation created it, would it be a worker? It would be doing work and creating value for a capitalist.
Would it still be part of the means of production, since it is technically a machine, even if it has feelings and desires?
It can’t legally own anything, so I don’t see how it could be part of the bourgeoisie.
Or would it fit a novel category?
How would we even know if an AI was sentient? How do we know that WE are sentient? Are nonhuman animals sentient, and if so, are they proletariat or means of production? What about insects and worms? Bacteria? Viruses? Nations? Crowds on the street? Economic systems? Concepts? Archetypes?

If sentience is physical, how many neurons (or other hardware), and in what configuration, create sentience? If sentience is purely information-theoretical, which information is sentient and which is not? Is a rock sentient? Is the concept of the number 2 sentient? Are quantum particles/waves/probability fields sentient? Does sentience depend on determinism or nondeterminism?

If we made a perfect simulation of your brain, would it be ethical to simulate torturing it? If I scanned your brain, killed your ‘original body’, and uploaded the brain scan into a computer that could perfectly simulate your brain based on the complete physical data of its structure, would the ‘real you’ ‘wake up’ inside the computer, would it be a new person entirely, or would it be just more lifeless code, like a complex video game character or a simulation of the solar system?

If we made a perfect instant clone of your brain and body and synchronized the brain states somehow, would both bodies share the same ‘perspective’, or would they still be distinct (though very similar and coordinated) people? Or perhaps a new single ‘person’ with two bodies?