Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He’s a bestselling author, and the creator of the popular, free online course, Generative AI for Execs.
Follow @shellypalmer or visit shellypalmer.com
What do you see as the primary hurdles or challenges hindering international collaboration in AI regulation?
The problem with AI regulation is that you’re trying to align AI with human values. This may sound easy, except that it’s difficult to find groups of humans who agree on a shared set of human values, so there is no standard to use to align AI models.
AI’s large language models (LLMs) are trained on the public web; that training reflects the world we actually live in, not the world we would like to live in. Regulating that will be about as easy to implement as establishing a world government.
We also need to start looking at the problem a different way: people are fixated on the LLMs when they should be fixated on data. Data sovereignty and data privacy are fundamental to any form of AI regulation.
How do you strike a balance between establishing global standards for AI regulation while considering diverse cultural, ethical, and societal differences across nations?
Finding that balance is a challenge. For example, in China, President Xi Jinping has said that all LLMs created there must comply with proper socialist values. That is not going to work in other parts of the world. Approximately 50 percent of the public web is written in English, yet only 20 percent of the global population speaks English. Our realities and our cultures are both shaped by and a reflection of our languages. Every language has words, phrases, metaphors and verb conjugations that are unique and difficult for professional human interpreters to translate accurately, so how could we hope to cross-culturally train, let alone globally standardize, AI?
We also need to consider how gendered language fits into AI. Many of the models are trained on “traditional” historical roles, and this varies from country to country and industry to industry.
So, whether we need AI regulation or better AI regulation, we first need to ask ourselves: whose world-view are we going to impose?
“Education is key. Everyone should make it a priority to learn as much as they can about how AI works, how it is trained.”
What is the impact of copyright on reaching global collaboration for the safe delivery of AI?
Look at Google and YouTube: they haven’t paid for the internet content they index, or for the videos that get uploaded.
The current generative AI developers have copied everything they can get their hands on and trained their models on it. In my mind, that’s no different from Google copying the whole web and training on it. Generative AI is not searching the private web, nor does it have access to people’s hard drives. The vast majority of data in the world sits on private networks and is encrypted, so it is unavailable for AI training.
As for IP protection, all current AI use cases may be fair use. This will be decided by the courts. Interestingly, you can use AI to create content that will infringe on someone’s copyrighted work. However, at the moment, the work you create with AI is not copyrightable in the US. So, we’ve got an interesting set of new copyright issues.
Will AI make us all more productive?
Some people call this a “skills democratiser,” but that is patently wrong; it’s a “skills amplifier.” The more you know and the more subject matter expertise you have, the more powerful you are with these tools. The better you know how to ask for information, the more success you are going to have.
If AI is more regulated who will be the winners and losers?
Regulating AI is an intractable problem. And it can’t all be lumped into one bucket. AI can be used to enhance productivity, as a transformation layer that lets you “talk directly to your data,” and to create entirely new products.
Each of these areas requires a different kind of regulation, and that is going to be a challenge. One of the key concerns is that the people who deeply understand these tools and how to build them will become more rarefied and more powerful. Those who attempt to regulate AI will be welcomed with open arms by the biggest players, because regulatory capture is the direct path to the big players getting bigger and the smaller ones staying small.
What can we learn from other attempts to regulate seemingly unwieldy areas of tech-based adoption – such as social media and cryptocurrency?
The world was not designed; it evolved. While most people don’t want to see AI surface abusive or illegal content, most people would also not want to live in a world where a small group of people decides what we can and cannot see, or think.
I don’t believe it is AI that can be properly regulated; it is actually data privacy and data sovereignty that can and should be regulated. If we had proper data policies, AI would naturally follow, because the types of AI most people are trying to regulate are trained on data. Education is key. Everyone should make it a priority to learn as much as they can about how AI works, how it is trained, and what is actually happening when you use apps like ChatGPT. A solid understanding of the problem set will go a long way towards finding answers that benefit us all.