These things are interesting to me for two reasons.
The first is that it seems utterly unsurprising that these inconsistencies exist. These are language models. People seem to fall easily into the trap of believing them to have any kind of "programming" or logic.
The second is just how unscientific NNs and ML are. This is why it's hard to study ML as a science. The original paper referenced doesn't really explain the issue or how to fix it, because there's not much you can do to explain ML (see the second paragraph of their discussion). It's not like the derivation of a formula, where you can point to one component and say "this is where you go wrong".
…are you serious?
There would be so much data in people's light usage. For example, you could figure out how late or early people get up, the number of people living in a house, how crowded the house is, how many lights are used per room, etc. It would be a gold mine of information.
Let's say you're a home automation designer. You want to design devices for use in the home, but to design such devices you need a big enough stockpile of user data. This lightbulb data would be incredibly valuable.
You could probably even analyse the data and determine things like whether someone is watching TV late at night.
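To make that concrete, here's a minimal sketch of the kind of inference such a dataset would allow. The event-log format (timestamp, room, on/off) and all the data are made up for illustration; no real vendor's API is assumed.

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical smart-bulb event log: (ISO timestamp, room, "on"/"off").
events = [
    ("2024-03-01T06:42:00", "bedroom", "on"),
    ("2024-03-01T23:55:00", "living_room", "off"),
    ("2024-03-02T06:38:00", "bedroom", "on"),
    ("2024-03-02T01:20:00", "living_room", "off"),
]

first_on = {}                    # date -> earliest "on" (proxy for wake-up)
last_off = {}                    # date -> latest "off" (proxy for bedtime)
rooms_active = defaultdict(set)  # date -> rooms used (proxy for occupancy)

for ts, room, state in events:
    t = datetime.fromisoformat(ts)
    day = t.date()
    rooms_active[day].add(room)
    if state == "on" and (day not in first_on or t < first_on[day]):
        first_on[day] = t
    if state == "off" and (day not in last_off or t > last_off[day]):
        last_off[day] = t

for day in sorted(first_on):
    print(f"{day}: first light on ~{first_on[day].time()} (likely wake-up); "
          f"rooms used: {sorted(rooms_active[day])}")
    # A light still switching off in the small hours suggests someone
    # up late, e.g. watching TV, as speculated above.
    if day in last_off and last_off[day].hour < 4:
        print(f"  lights out at {last_off[day].time()} -> possible late-night TV")
```

Even with only on/off timestamps and no sensor data, a few dictionary passes like this recover daily routines; richer logs (brightness, per-bulb IDs) would sharpen every one of these guesses.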
From a more nefarious angle, how valuable would this data be to burglars and thieves?