GPT-4 debate continues – the call for a 6-month moratorium on AI progress beyond GPT-4 is a terrible idea, says Andrew Ng, Co-Founder of Coursera, Stanford CS adjunct faculty, and former head of Baidu AI Group and Google Brain.
In a series of Twitter posts, Andrew Ng wrote: I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs. realistic risks.
There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy, Andrew adds.
Furthermore, he states that responsible AI is important and that AI has risks. The popular press narrative that AI companies are running amok shipping unsafe code is just not true. The vast majority (sadly, not all) of AI teams take responsible AI and safety seriously.
A 6-month GPT-4 moratorium is not a practical proposal – AI Expert Andrew Ng
A 6-month moratorium on GPT-4 is not a practical proposal. To advance AI safety, regulations around transparency and auditing would be more practical and make a bigger difference. Let’s also invest more in safety while we advance the technology, rather than stifle progress, Ng added.
Debating Andrew Ng’s take on the ChatGPT moratorium, one Twitter user wrote – Moratoriums always worry me, as they often end up as naive signaling that, even though well-intentioned, makes more nuanced and constructive conversations more difficult to have.
Another Twitter user explains – Although the 6-month moratorium is not a great idea, I definitely think there are more cons to be dealt with in the short term with respect to the rapid exponential growth in AI’s capabilities.
For starters, people can’t change as quickly as machines grow; the billions losing jobs in developing economies like India will forever leave a scar on our history.
We need better regulations WORLDWIDE (not just in the US), and we also need to better define future career and academic goals for humans for the next 1 to 2 decades. Because, whether we like it or not, the world has already changed, and it’s only been 14 days since.
Here is another view shared by a Twitter user about Andrew Ng’s position – I’m sorry, but I completely disagree. The amount of disruption, and the rapidity of it, is a recipe for societal angst and existential fear. Nowhere in history has there been a technology this disruptive. We as members of a society deserve a chance to think about it and how we want it to play out. You can’t tell me that six months in the long run is not doable. My brother-in-law and his whole family are graphic designers, layout artists, and website designers. This is going to cause real pain in the beginning. Sure, they will start to use the tools, but over time they would likely be out of the business they spent an entire life building.
While AI has the potential to generate enormous value, it also poses risks. Proposing a six-month moratorium on AI development beyond GPT-4 is not a feasible solution and could be considered anti-competitive, setting a negative precedent. Instead, regulations pertaining to transparency and auditing should be put in place to ensure the responsible and safe development of AI. Simultaneously, there should be an increased investment in safety as the technology advances.