In the least surprising turn of events, the companies that were saying AI is a dangerous threat to humanity in need of regulation are now complaining about the proposed regulations.
The rules do seem somewhat rooted in science-fiction concerns.
The proposed rules include:
- annual tests that certify the AI model is “safe”
- required reporting of “safety incidents” to the government
- a “kill switch” to turn off the AI system in case it goes rogue
- fines of up to 30% of model development costs