The Biden administration released an executive order on October 30th, 2023 regarding the governance of artificial intelligence. I applaud the effort the White House is making toward regulating the AI industry and protecting the general public on privacy and safety concerns. These efforts focus on national security issues regarding biological, chemical, and nuclear weapons, specifically in connection with biotechnology and cybersecurity. The order calls on the respective federal agencies to draft plans in the next 90-270 days to regulate companies involved in the AI industry.
A lot of good things will come out of this executive order. But as human nature dictates, it's easier to criticize than to praise the positives. So here is what I think needs improvement:
- My major concern is that this document will prevent small companies from entering the large language model (LLM) market. Early in the document, one of the guiding principles is to promote competition, especially from smaller companies. But later, it calls on the agencies to draft guidelines requiring companies to report the development of LLMs over 10 billion parameters. The intention is to keep track of models that can be used for many purposes, including those that pose national security risks. However, this requirement could inadvertently impede the development of models by small companies, because they wouldn't have the capacity to produce the documentation required by these agencies. The 10 billion parameter threshold means the latest Llama open-source models fall under the reporting requirements, and even companies that simply possess these models would be covered. This leaves the making of large models in the hands of big tech companies like Google, Facebook, Microsoft, etc. That is not what the general public would like to see.
- Another issue I see is that some of the guidelines and policies will be difficult to enforce. For example, one of the principles is reducing the effect of AI on the general workforce. It is a well-intended principle. But in practice, the profit-seeking nature of companies means they will use the power of AI to cut workforce spending. Even though many companies talk about "augmentation" of workers, the reality is that headcounts have already been reduced because of AI implementations. People who cannot or are unwilling to learn AI technologies will be hit by workforce reductions in the future, and that future could arrive sooner than people think.
- It's late. On privacy laws, we are behind Europe and its GDPR requirements. Some states, such as California, have their own privacy laws, but we still don't have unified national requirements. For comparison, China has already mandated that online commerce companies stop using consumer data to track and target consumers for the sole purpose of maximizing profit. One example is preventing the practice of giving new customers better deals while "ignoring" existing customers, or selectively offering worse deals to people the company knows will purchase those items regardless.