AI is the next front in the culture war

Artificial intelligence has become the latest battleground between left and right. But AI is not social media; it is an innovation that should be governed industry by industry.

By Neil Chilson

AI's breakthrough into popular culture, marked by chatbot tools like ChatGPT, has turned this technology into a battleground for culture warriors. However, equating artificial intelligence (AI) with social media platforms could cost us significant advances in healthcare and transportation, and our global leadership in technology.

Over the past decade, politicians have developed a playbook for scoring political points by criticizing social media. Democrats have focused on the spread of misinformation and disinformation, while Republicans have raised concerns about perceived bias against conservative views.

Both sides seek to "work the refs," hoping to weaponize social media platforms against their political opponents.


When ChatGPT was first released, culture warriors mobilized, competing with each other to trick the bot into misbehaving. Critics on the right have claimed that it has a liberal bias. One user asked ChatGPT to create a poem admiring Donald Trump; the bot refused, yet was able to create a poem praising President Biden.

On the other hand, ChatGPT has also been accused by those on the left of being biased against minorities, sexist and non-inclusive. For example, one researcher asked ChatGPT how one might identify good scientists given only demographic information. The bot’s response was to screen for white males.

In many of these cases, ChatGPT's results say more about the person asking the question than they do about the underlying machine learning. It is a tool that strings together patterns of words that make sense statistically, but may or may not make sense logically or factually. AI chatbots riff. They do not reflect.
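To see what "statistically, not logically" means in practice, consider a minimal sketch, which assumes nothing about ChatGPT's actual internals: a toy bigram model in Python that picks each word based on whatever tended to follow it in its training text. Real chatbots use vastly larger neural networks, but the sketch shows why fluent-sounding output can still contradict itself.

```python
import random
from collections import defaultdict

# Toy "training" text. Note it contains contradictory statements.
corpus = (
    "the senator praised the bill and the senator criticized the bill "
    "because the bill helped the economy and hurt the economy"
).split()

# Record which words follow which: the statistical "patterns."
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Emit words by sampling what statistically follows the previous word."""
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # statistically plausible, not reasoned
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the bill helped the economy and hurt the economy and the senator ..."
# Locally fluent, yet it can praise and criticize in one breath: it riffs,
# it does not reflect.
```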

But all this is missing the point:

ChatGPT is AI, but not all AI is ChatGPT. In fact, most AI is not ChatGPT. And neither AI nor ChatGPT is social media. Lumping it all together as one categorically "dangerous" technology will stifle our economy and halt innovation.

We need the right understanding of AI technology to make sense of the risks and benefits. 

Congress has yet to get that understanding. In May of this year, the Senate Judiciary Committee hosted OpenAI CEO Sam Altman and others in the first big AI hearing since the release of ChatGPT. And Congress brought its social media playbook.

Sen. Richard Blumenthal, D-Conn., who chaired the hearing, expressed concerns in his opening remarks about "weaponized disinformation, housing discrimination, the harassment of women and impersonation fraud, voice cloning, and deep fakes," and noted that Congress "had the same choice when we faced social media. We failed to seize that moment."

Missouri Republican Sen. Josh Hawley worried about election manipulation and targeted advertising business models driving tech addiction. Illinois Democrat Sen. Dick Durbin built on Blumenthal’s parallels to social media by criticizing the liability regime around social media platforms and asking how AI companies should be liable for how their tools are used.

Others raised issues of intellectual property, election misinformation, privacy and data security, all issues that Congress has raised over the years about social media.

AI is not another social media platform, and it is much bigger than the AI chatbots that have made such a splash. ChatGPT uses recent breakthrough technologies to identify a web of patterns in human language, and then applies that understanding to create a chat-like interface. But the underlying machine learning technology, which automatically recognizes patterns in large amounts of raw data, is already in widespread use.


In fact, similar technology already powers the voice recognition and text dictation used by Alexa devices and Siri, and the face-recognition feature that unlocks your iPhone. More powerful versions of those techniques are delivering huge benefits in medical imaging, drug discovery, collision avoidance and traffic management, personalized education and more.
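To make that concrete, here is a hedged illustration (the tooling is my choice; the article names none) of the same core technique, automatic pattern recognition from raw data, applied to images rather than chat. A few lines of Python using the widely used scikit-learn library learn to read handwritten digits, the same family of methods behind face unlock and medical imaging.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network finds the pixel patterns that distinguish digits;
# nothing about it is chatbot-specific.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print(f"accuracy: {model.score(X_test, y_test):.2f}")  # typically well above 0.9
```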

Regulating AI to deal with the risks of chatbots is like regulating all uses of metal to deal with car accidents.

That's why we need a sector-specific, application-focused approach to AI. Just as we have different rules for steel hammers, titanium hip replacements, iron girders and aluminum planes, our rules for AI ought to focus on the applications.

We already have piles of rules for many different industries where AI will be applied. Government should identify what rules already apply to specific AI applications. If there are gaps, those should be filled. And importantly, if existing rules unnecessarily prevent the use of AI in an industry or application, those should be revised or removed.

This is difficult, lower-profile work. But it is crucial for maximizing AI’s benefits. If culture warriors continue to conscript AI companies into the culture war, we risk losing sight of the broader societal benefits that AI offers. The price of waging this culture war would be missing the much bigger opportunity to help everyone live safer, healthier, more productive lives. 

